Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
vm01.local vps True True 2026-03-11 12:58:09.969914 irq0 rocky 9.7 x86_64 /archive/irq0-2026-03-11_11:43:20-rgw:verify-cobaltcore-storage-v19.2.3-fasttrack-10-none-default-vps/1237
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 1237 2026-03-11 11:43:26 2026-03-11 12:56:37 2026-03-11 13:41:13 0:46:07 vps clyso-debian-13 rocky 9.7 rgw:verify/{0-install accounts$/{none} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec-profile s3tests-branch sharding$/{single} striping$/{stripe-equals-chunk} supported-random-distro$/{rocky_latest} tasks/{bucket-check cls mp_reupload rados-pool-quota ragweed reshard s3tests versioning zzz-s3tests-java} validater/valgrind} 2
fail 1235 2026-03-11 11:43:25 2026-03-11 12:36:01 2026-03-11 12:57:59 0:21:58 0:12:40 0:09:18 vps clyso-debian-13 rocky 9.7 rgw:verify/{0-install accounts$/{tenant} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec s3tests-branch sharding$/{default} striping$/{stripe-greater-than-chunk} supported-random-distro$/{rocky_latest} tasks/{bucket-check cls mp_reupload rados-pool-quota ragweed reshard s3tests versioning zzz-s3tests-java} validater/valgrind} 2
Failure Reason:

Command failed (workunit test rgw/run-bucket-check.sh) on vm01 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9bcb89c865ca9ce51bf072c6ff79cca63cd02356 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/run-bucket-check.sh'

pass 1232 2026-03-11 11:43:23 2026-03-11 11:57:59 2026-03-11 12:42:58 0:44:59 0:36:20 0:08:39 vps clyso-debian-13 rocky 9.7 rgw:verify/{0-install accounts$/{main-tenant} clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec s3tests-branch sharding$/{single} striping$/{stripe-equals-chunk} supported-random-distro$/{rocky_latest} tasks/{bucket-check cls mp_reupload rados-pool-quota ragweed reshard s3tests versioning zzz-s3tests-java} validater/lockdep} 2
pass 1223 2026-03-11 11:43:12 2026-03-11 11:43:13 2026-03-11 12:03:51 0:20:38 0:18:38 0:02:00 vps clyso-debian-13 rocky 9.7 rgw:tempest/{0-install clusters/fixed-1 frontend/beast ignore-pg-availability overrides rocky_latest s3tests-branch tasks/s3/{auth-order/external-local s3tests}} 1
dead 1209 2026-03-11 09:29:07 2026-03-11 11:20:55 2026-03-11 11:37:51 0:16:56 vps clyso-debian-13 rocky 9.7 rgw:verify/{0-install accounts$/{tenant} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec s3tests-branch sharding$/{default} striping$/{stripe-greater-than-chunk} supported-random-distro$/{rocky_latest} tasks/{bucket-check cls mp_reupload rados-pool-quota ragweed reshard s3tests versioning zzz-s3tests-java} validater/lockdep} 2
fail 1204 2026-03-11 09:29:04 2026-03-11 11:03:43 2026-03-11 11:22:42 0:18:59 0:15:17 0:03:42 vps clyso-debian-13 rocky 9.7 rgw:verify/{0-install accounts$/{main-tenant} clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{off-no-gc} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/replicated s3tests-branch sharding$/{single} striping$/{stripe-greater-than-chunk} supported-random-distro$/{rocky_latest} tasks/{bucket-check cls mp_reupload rados-pool-quota ragweed reshard s3tests versioning zzz-s3tests-java} validater/valgrind} 2
Failure Reason:

Command failed (workunit test rgw/test_librgw_file.sh) on vm01 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=50e98e8318117ef866947b5847d947538c5efcdc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_librgw_file.sh'

fail 1199 2026-03-11 09:29:02 2026-03-11 10:44:28 2026-03-11 11:03:37 0:19:09 0:14:37 0:04:32 vps clyso-debian-13 rocky 9.7 rgw:verify/{0-install accounts$/{none} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec-profile s3tests-branch sharding$/{single} striping$/{stripe-greater-than-chunk} supported-random-distro$/{rocky_latest} tasks/{bucket-check cls mp_reupload rados-pool-quota ragweed reshard s3tests versioning zzz-s3tests-java} validater/lockdep} 2
Failure Reason:

Command failed (workunit test rgw/test_librgw_file.sh) on vm01 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=50e98e8318117ef866947b5847d947538c5efcdc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_librgw_file.sh'

fail 1194 2026-03-11 09:29:00 2026-03-11 10:26:20 2026-03-11 10:46:33 0:20:13 0:14:57 0:05:16 vps clyso-debian-13 rocky 9.7 rgw:verify/{0-install accounts$/{main-tenant} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec s3tests-branch sharding$/{single} striping$/{stripe-greater-than-chunk} supported-random-distro$/{rocky_latest} tasks/{bucket-check cls mp_reupload rados-pool-quota ragweed reshard s3tests versioning zzz-s3tests-java} validater/valgrind} 2
Failure Reason:

Command failed (workunit test rgw/test_librgw_file.sh) on vm01 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=50e98e8318117ef866947b5847d947538c5efcdc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_librgw_file.sh'

fail 1188 2026-03-11 09:28:57 2026-03-11 10:08:13 2026-03-11 10:27:18 0:19:05 0:16:00 0:03:05 vps clyso-debian-13 rocky 9.7 rgw:verify/{0-install accounts$/{main} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec s3tests-branch sharding$/{default} striping$/{stripe-greater-than-chunk} supported-random-distro$/{rocky_latest} tasks/{bucket-check cls mp_reupload rados-pool-quota ragweed reshard s3tests versioning zzz-s3tests-java} validater/valgrind} 2
Failure Reason:

Command failed (workunit test rgw/test_librgw_file.sh) on vm01 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=50e98e8318117ef866947b5847d947538c5efcdc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_librgw_file.sh'
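The repeated `test_librgw_file.sh` failures above all exit with status 139. In POSIX shells, a status above 128 means the process was killed by a signal (status minus 128), so 139 corresponds to signal 11, SIGSEGV. A minimal sketch for decoding such statuses (the helper name is illustrative, not part of teuthology):

```python
import signal

def describe_exit(status: int) -> str:
    """Interpret a shell exit status: values above 128 mean death by signal."""
    if status > 128:
        sig = status - 128
        # signal.Signals(...).name maps the number to its symbolic name
        return f"terminated by signal {sig} ({signal.Signals(sig).name})"
    return f"exited with code {status}"

print(describe_exit(139))  # the librgw failures above: signal 11, SIGSEGV
print(describe_exit(1))    # the run-bucket-check.sh failure: ordinary exit code 1
```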

pass 1183 2026-03-11 09:26:16 2026-03-11 09:27:28 2026-03-11 10:06:58 0:39:30 0:37:00 0:02:30 vps clyso-debian-13 rocky 9.7 smoke:basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-all-distro/rocky_latest tasks/{0-install test/rados_bench}} 3
fail 1175 2026-03-10 23:48:07 2026-03-10 23:59:28 2026-03-11 00:51:41 0:52:13 0:42:07 0:10:06 vps clyso-debian-13 rocky 9.7 smoke:basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-all-distro/rocky_latest tasks/{0-install test/rados_bench}} 3
Failure Reason:

"2026-03-11T00:43:02.068935+0000 mon.a (mon.0) 4392 : cluster [ERR] Health check failed: mon c is very low on available space (MON_DISK_CRIT)" in cluster log

pass 1169 2026-03-10 21:34:02 2026-03-10 23:20:21 2026-03-11 00:06:11 0:45:50 0:19:09 0:26:41 vps clyso-debian-13 rocky 9.7 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-all-distro/rocky_latest tasks/{0-install test/rbd_python_api_tests}} 3
pass 1168 2026-03-10 21:34:02 2026-03-10 23:11:06 2026-03-10 23:28:22 0:17:16 0:06:11 0:11:05 vps clyso-debian-13 rocky 9.7 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-all-distro/rocky_latest tasks/{0-install test/rbd_fsx}} 3
pass 1167 2026-03-10 21:34:01 2026-03-10 23:08:51 2026-03-10 23:19:07 0:10:16 0:06:12 0:04:04 vps clyso-debian-13 rocky 9.7 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-all-distro/rocky_latest tasks/{0-install test/rbd_cli_import_export}} 3
fail 1161 2026-03-10 21:33:59 2026-03-10 22:24:43 2026-03-10 23:09:32 0:44:49 0:41:15 0:03:34 vps clyso-debian-13 rocky 9.7 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-all-distro/rocky_latest tasks/{0-install test/rados_bench}} 3
Failure Reason:

"2026-03-10T22:50:40.009179+0000 mon.a (mon.0) 2728 : cluster [ERR] Health check failed: mon a is very low on available space (MON_DISK_CRIT)" in cluster log

pass 1155 2026-03-10 21:33:57 2026-03-10 21:49:52 2026-03-10 22:25:51 0:35:59 0:32:06 0:03:53 vps clyso-debian-13 rocky 9.7 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-all-distro/rocky_latest tasks/{0-install test/kclient_workunit_suites_dbench}} 3
pass 1152 2026-03-10 21:33:55 2026-03-10 21:36:33 2026-03-10 21:50:03 0:13:30 0:09:57 0:03:33 vps clyso-debian-13 rocky 9.7 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-all-distro/rocky_latest tasks/{0-install test/cfuse_workunit_suites_iozone}} 3
pass 1147 2026-03-10 18:08:51 2026-03-10 20:41:55 2026-03-10 21:02:50 0:20:55 0:12:36 0:08:19 vps clyso-debian-13 rocky 9.7 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-all-distro/rocky_latest tasks/{0-install test/rgw_s3tests}} 3
pass 1138 2026-03-10 18:08:47 2026-03-10 19:33:41 2026-03-10 20:07:12 0:33:31 0:16:28 0:17:03 vps clyso-debian-13 rocky 9.7 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-all-distro/rocky_latest tasks/{0-install test/rados_ec_snaps}} 3
pass 1137 2026-03-10 18:08:46 2026-03-10 19:30:09 2026-03-10 19:47:42 0:17:33 0:12:51 0:04:42 vps clyso-debian-13 rocky 9.7 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-all-distro/rocky_latest tasks/{0-install test/rados_cls_all}} 3