| User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
|---|---|---|---|---|---|---|---|---|---|---|---|
| kyr | 2026-03-31 11:18:10 | 2026-03-31 23:05:37 | 2026-03-31 23:25:49 | 0:20:12 | rados | tentacle | vps | 5bb3278 | 50 | 11 | 7 |
| Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| pass | 4298 | | 2026-03-31 11:18:32 | 2026-03-31 11:18:33 | 2026-03-31 11:27:37 | 0:09:04 | 0:06:45 | 0:02:19 | vps | uv2 | centos | 9.stream | rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/{bluestore/{alloc$/{bitmap} base mem$/{low} onode-segment$/{512K} write$/{v1/{compr$/{yes$/{lz4}} v1}}}} rados supported-random-distro$/{centos_latest} tasks/mon_recovery} | 2 |
| fail | 4299 | | 2026-03-31 11:18:33 | 2026-03-31 11:19:36 | 2026-03-31 11:34:58 | 0:15:22 | 0:12:56 | 0:02:26 | vps | uv2 | centos | 9.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr distro/{centos_latest} mgr_ttl_cache/enable mon_election/classic random-objectstore$/{bluestore/{alloc$/{avl} base mem$/{low} onode-segment$/{1M} write$/{random/{compr$/{no$/{no}} random}}}} tasks/{1-install 2-ceph 3-mgrmodules 4-units/failover}}<br>Failure Reason: Test failure: test_maybe_reonnect (tasks.mgr.test_failover.TestLibCephSQLiteFailover) | 2 |
| pass | 4300 | | 2026-03-31 11:18:33 | 2026-03-31 11:20:56 | 2026-03-31 11:40:50 | 0:19:54 | 0:10:09 | 0:09:45 | vps | uv2 | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_extra_daemon_features} | 2 |
| dead | 4301 | | 2026-03-31 11:18:34 | 2026-03-31 11:28:49 | 2026-03-31 13:32:11 | 2:03:22 | | | vps | uv2 | centos | 9.stream | rados/singleton/{all/backfill-toofull mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/{bluestore/{alloc$/{avl} base mem$/{normal-1} onode-segment$/{512K} write$/{v1/{compr$/{no$/{no}} v1}}}} rados supported-random-distro$/{centos_latest}}<br>Failure Reason: hit max job timeout | 1 |
| pass | 4302 | | 2026-03-31 11:18:34 | 2026-03-31 11:30:01 | 2026-03-31 11:42:33 | 0:12:32 | 0:05:33 | 0:06:59 | vps | uv2 | ubuntu | 22.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/{bluestore/{alloc$/{hybrid} base mem$/{low} onode-segment$/{512K-onoff} write$/{random/{compr$/{no$/{no}} random}}}} rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/rados_5925} | 2 |
| pass | 4303 | | 2026-03-31 11:18:34 | 2026-03-31 11:36:32 | 2026-03-31 12:08:45 | 0:32:13 | 0:22:42 | 0:09:31 | vps | uv2 | ubuntu | 22.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-4} ec_optimizations/ec_optimizations_on fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 4 |
| pass | 4304 | | 2026-03-31 11:18:35 | 2026-03-31 11:44:43 | 2026-03-31 12:22:29 | 0:37:46 | 0:10:03 | 0:27:43 | vps | uv2 | centos | 9.stream | rados/thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/3-size-2-min-size 1-install/squid backoff/peering ceph clusters/{three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/test_rbd_api} | 3 |
| pass | 4305 | | 2026-03-31 11:18:35 | 2026-03-31 12:10:28 | 2026-03-31 12:37:37 | 0:27:09 | 0:13:18 | 0:13:51 | vps | uv2 | ubuntu | 22.04 | rados/basic/{ceph clusters/{fixed-2} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/{bluestore/{alloc$/{bitmap} base mem$/{low} onode-segment$/{none} write$/{v1/{compr$/{yes$/{snappy}} v1}}}} rados supported-random-distro$/{ubuntu_latest} tasks/rados_api_tests} | 2 |
| pass | 4306 | | 2026-03-31 11:18:36 | 2026-03-31 12:23:35 | 2026-03-31 13:17:38 | 0:54:03 | 0:37:28 | 0:16:35 | vps | uv2 | centos | 9.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-4} ec_optimizations/ec_optimizations_on fast/normal mon_election/connectivity msgr-failures/few objectstore/{bluestore/{alloc$/{bitmap} base mem$/{normal-2} onode-segment$/{none} write$/{v2/{compr$/{yes$/{zstd}} v2}}}} rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} | 4 |
| pass | 4307 | | 2026-03-31 11:18:36 | 2026-03-31 12:39:36 | 2026-03-31 13:43:20 | 1:03:44 | 0:23:00 | 0:40:44 | vps | uv2 | ubuntu | 22.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds} ec_optimizations/ec_optimizations_on mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/{bluestore/{alloc$/{hybrid} base mem$/{normal-1} onode-segment$/{none} write$/{v1/{compr$/{no$/{no}} v1}}}} rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |
| pass | 4308 | | 2026-03-31 11:18:36 | 2026-03-31 13:19:19 | 2026-03-31 14:11:29 | 0:52:10 | 0:24:28 | 0:27:42 | vps | uv2 | centos | 9.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-4} ec_optimizations/ec_optimizations_on mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/{bluestore/{alloc$/{hybrid} base mem$/{low} onode-segment$/{512K-onoff} write$/{v1/{compr$/{yes$/{snappy}} v1}}}} rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/careful_host thrashosds-health workloads/ec-rados-plugin=isa-k=6-m=3} | 4 |
| dead | 4309 | | 2026-03-31 11:18:37 | 2026-03-31 13:45:27 | 2026-03-31 16:15:46 | 2:30:19 | | | vps | uv2 | centos | 9.stream | rados/thrash-erasure-code-crush-4-nodes/{arch/x86_64 ceph clusters/{fixed-4} ec_optimizations/ec_optimizations_on mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/{bluestore/{alloc$/{stupid} base mem$/{normal-2} onode-segment$/{512K} write$/{random/{compr$/{no$/{no}} random}}}} rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/careful_host thrashosds-health workloads/ec-rados-plugin=jerasure-k=8-m=6-crush}<br>Failure Reason: hit max job timeout | 4 |
| pass | 4310 | | 2026-03-31 11:18:37 | 2026-03-31 14:13:37 | 2026-03-31 16:34:49 | 2:21:12 | 0:18:17 | 2:02:55 | vps | uv2 | ubuntu | 22.04 | rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 |
| fail | 4311 | | 2026-03-31 11:18:38 | 2026-03-31 16:14:47 | 2026-03-31 16:40:34 | 0:25:47 | 0:22:26 | 0:03:21 | vps | uv2 | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/dashboard}<br>Failure Reason: Test failure: test_list_enabled_module (tasks.mgr.dashboard.test_mgr_module.MgrModuleTest) | 2 |
| fail | 4312 | | 2026-03-31 11:18:38 | 2026-03-31 16:16:32 | 2026-03-31 16:23:17 | 0:06:45 | 0:04:34 | 0:02:11 | vps | uv2 | ubuntu | 22.04 | rados/encoder/{0-start 1-tasks supported-random-distro$/{ubuntu_latest}}<br>Failure Reason: "grep: /var/log/ceph/a3a63070-2d1d-11f1-b397-1d7fa9085bf3/ceph.log: No such file or directory" in cluster log | 1 |
| pass | 4313 | | 2026-03-31 11:18:38 | 2026-03-31 16:17:15 | 2026-03-31 16:36:04 | 0:18:49 | 0:10:28 | 0:08:21 | vps | uv2 | centos | 9.stream | rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{centos_latest}} | 1 |
| pass | 4314 | | 2026-03-31 11:18:39 | 2026-03-31 16:24:02 | 2026-03-31 16:45:29 | 0:21:27 | 0:09:25 | 0:12:02 | vps | uv2 | ubuntu | 22.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 |
| pass | 4315 | | 2026-03-31 11:18:39 | 2026-03-31 16:35:28 | 2026-03-31 16:44:06 | 0:08:38 | 0:07:19 | 0:01:19 | vps | uv2 | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/c2c} | 1 |
| fail | 4316 | | 2026-03-31 11:18:40 | 2026-03-31 16:36:05 | 2026-03-31 16:47:53 | 0:11:48 | 0:04:59 | 0:06:49 | vps | uv2 | ubuntu | 22.04 | rados/upgrade/parallel/{0-random-distro$/{ubuntu_22.04} 0-start 1-tasks mon_election/classic overrides/ignorelist_health upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}}<br>Failure Reason: "grep: /var/log/ceph/ffcf3cfe-2d20-11f1-b472-63d48654c27e/ceph.log: No such file or directory" in cluster log | 2 |
| pass | 4317 | | 2026-03-31 11:18:40 | 2026-03-31 16:41:52 | 2026-03-31 16:56:55 | 0:15:03 | 0:12:25 | 0:02:38 | vps | uv2 | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} | 1 |
| pass | 4318 | | 2026-03-31 11:18:40 | 2026-03-31 16:42:53 | 2026-03-31 17:18:07 | 0:35:14 | 0:27:10 | 0:08:04 | vps | uv2 | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-4} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/{bluestore/{alloc$/{hybrid} base mem$/{normal-2} onode-segment$/{512K-onoff} write$/{random/{compr$/{yes$/{snappy}} random}}}} rados supported-random-distro$/{ubuntu_latest} thrashers/careful_host thrashosds-health workloads/pool-snaps-few-objects} | 4 |
| pass | 4319 | | 2026-03-31 11:18:41 | 2026-03-31 16:50:05 | 2026-03-31 17:05:50 | 0:15:45 | 0:07:10 | 0:08:35 | vps | uv2 | ubuntu | 22.04 | rados/singleton/{all/deduptool mon_election/classic msgr-failures/none msgr/async objectstore/{bluestore/{alloc$/{avl} base mem$/{low} onode-segment$/{512K} write$/{random/{compr$/{yes$/{lz4}} random}}}} rados supported-random-distro$/{ubuntu_latest}} | 1 |
| pass | 4320 | | 2026-03-31 11:18:41 | 2026-03-31 16:57:49 | 2026-03-31 17:27:33 | 0:29:44 | 0:06:20 | 0:23:24 | vps | uv2 | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/deploy-raw} | 2 |
| pass | 4321 | | 2026-03-31 11:18:42 | 2026-03-31 17:19:32 | 2026-03-31 17:28:17 | 0:08:45 | 0:06:27 | 0:02:18 | vps | uv2 | ubuntu | 22.04 | rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 |
| pass | 4322 | | 2026-03-31 11:18:42 | 2026-03-31 17:20:16 | 2026-03-31 17:27:00 | 0:06:44 | 0:05:34 | 0:01:10 | vps | uv2 | centos | 9.stream | rados/singleton/{all/divergent_priors mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/{bluestore/{alloc$/{btree} base mem$/{normal-1} onode-segment$/{256K} write$/{v1/{compr$/{no$/{no}} v1}}}} rados supported-random-distro$/{centos_latest}} | 1 |
| pass | 4323 | | 2026-03-31 11:18:42 | 2026-03-31 17:20:59 | 2026-03-31 17:28:02 | 0:07:03 | 0:04:54 | 0:02:09 | vps | uv2 | centos | 9.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} | 1 |
| pass | 4324 | | 2026-03-31 11:18:43 | 2026-03-31 17:22:01 | 2026-03-31 17:35:59 | 0:13:58 | 0:07:01 | 0:06:57 | vps | uv2 | ubuntu | 22.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_read} | 1 |
| pass | 4325 | | 2026-03-31 11:18:43 | 2026-03-31 17:27:58 | 2026-03-31 17:37:09 | 0:09:11 | 0:06:33 | 0:02:38 | vps | uv2 | ubuntu | 22.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr distro/{ubuntu_latest} mgr_ttl_cache/disable mon_election/connectivity random-objectstore$/{bluestore/{alloc$/{hybrid} base mem$/{low} onode-segment$/{none} write$/{v2/{compr$/{yes$/{lz4}} v2}}}} tasks/{1-install 2-ceph 3-mgrmodules 4-units/insights}} | 2 |
| pass | 4326 | | 2026-03-31 11:18:44 | 2026-03-31 17:29:08 | 2026-03-31 17:45:35 | 0:16:27 | 0:07:35 | 0:08:52 | vps | uv2 | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_host_drain} | 3 |
| pass | 4327 | | 2026-03-31 11:18:44 | 2026-03-31 17:37:34 | 2026-03-31 17:44:18 | 0:06:44 | 0:05:54 | 0:00:50 | vps | uv2 | centos | 9.stream | rados/singleton/{all/divergent_priors2 mon_election/classic msgr-failures/many msgr/async-v2only objectstore/{bluestore/{alloc$/{avl} base mem$/{normal-1} onode-segment$/{256K} write$/{v1/{compr$/{no$/{no}} v1}}}} rados supported-random-distro$/{centos_latest}} | 1 |
| pass | 4328 | | 2026-03-31 11:18:44 | 2026-03-31 17:38:17 | 2026-03-31 18:05:47 | 0:27:30 | 0:16:20 | 0:11:10 | vps | uv2 | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering ceph clusters/{fixed-4} crc-failures/bad_map_crc_failure d-balancer/read mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/{bluestore/{alloc$/{stupid} base mem$/{normal-1} onode-segment$/{none} write$/{v1/{compr$/{yes$/{zstd}} v1}}}} rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/rados_api_tests} | 4 |
| pass | 4329 | | 2026-03-31 11:18:45 | 2026-03-31 17:47:45 | 2026-03-31 17:54:29 | 0:06:44 | 0:04:27 | 0:02:17 | vps | uv2 | centos | 9.stream | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 |
| fail | 4330 | | 2026-03-31 11:18:45 | 2026-03-31 17:48:28 | 2026-03-31 18:15:01 | 0:26:33 | 0:07:32 | 0:19:01 | vps | uv2 | ubuntu | 22.04 | rados/basic/{ceph clusters/{fixed-2} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/{bluestore/{alloc$/{stupid} base mem$/{normal-1} onode-segment$/{1M} write$/{random/{compr$/{yes$/{zlib}} random}}}} rados supported-random-distro$/{ubuntu_latest} tasks/rados_cls_all}<br>Failure Reason: Command failed (workunit test cls/test_cls_2pc_queue.sh) on vm08 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0392f78529848ec72469e8e431875cb98d3a5fb4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh' | 2 |
| fail | 4331 | | 2026-03-31 11:18:46 | 2026-03-31 18:07:00 | 2026-03-31 18:15:45 | 0:08:45 | 0:07:39 | 0:01:06 | vps | uv2 | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}}<br>Failure Reason: Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on vm06 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0392f78529848ec72469e8e431875cb98d3a5fb4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh' | 1 |
| pass | 4332 | | 2026-03-31 11:18:46 | 2026-03-31 18:07:44 | 2026-03-31 18:14:28 | 0:06:44 | 0:05:35 | 0:01:09 | vps | uv2 | centos | 9.stream | rados/singleton/{all/dump-stuck mon_election/connectivity msgr-failures/none msgr/async objectstore/{bluestore/{alloc$/{stupid} base mem$/{low} onode-segment$/{none} write$/{v1/{compr$/{yes$/{zlib}} v1}}}} rados supported-random-distro$/{centos_latest}} | 1 |
| pass | 4333 | | 2026-03-31 11:18:46 | 2026-03-31 18:08:27 | 2026-03-31 18:41:16 | 0:32:49 | 0:23:04 | 0:09:45 | vps | uv2 | centos | 9.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-4} ec_optimizations/ec_optimizations_off mon_election/classic msgr-failures/fastclose objectstore/{bluestore/{alloc$/{avl} base mem$/{low} onode-segment$/{512K-onoff} write$/{v1/{compr$/{no$/{no}} v1}}}} rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=10-m=4} | 4 |
| pass | 4334 | | 2026-03-31 11:18:47 | 2026-03-31 18:17:14 | 2026-03-31 19:01:36 | 0:44:22 | 0:16:49 | 0:27:33 | vps | uv2 | centos | 9.stream | rados/thrash-erasure-code-crush-4-nodes/{arch/x86_64 ceph clusters/{fixed-4} ec_optimizations/ec_optimizations_off mon_election/classic msgr-failures/fastclose objectstore/{bluestore/{alloc$/{hybrid} base mem$/{normal-2} onode-segment$/{1M} write$/{random/{compr$/{yes$/{snappy}} random}}}} rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=2-crush} | 4 |
| dead | 4335 | | 2026-03-31 11:18:47 | 2026-03-31 18:43:35 | 2026-03-31 20:58:50 | 2:15:15 | | | vps | uv2 | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/repave-all}<br>Failure Reason: hit max job timeout | 2 |
| fail | 4336 | | 2026-03-31 11:18:48 | 2026-03-31 18:56:42 | 2026-03-31 19:05:33 | 0:08:51 | 0:05:58 | 0:02:53 | vps | uv2 | centos | 9.stream | rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{centos_latest}}<br>Failure Reason: Command failed (workunit test post-file.sh) on vm01 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0392f78529848ec72469e8e431875cb98d3a5fb4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh' | 1 |
| fail | 4337 | | 2026-03-31 11:18:48 | 2026-03-31 18:57:31 | 2026-03-31 19:04:11 | 0:06:40 | 0:04:21 | 0:02:19 | vps | uv2 | ubuntu | 22.04 | rados/objectstore/{backends/ceph_test_bluefs supported-random-distro$/{ubuntu_latest}}<br>Failure Reason: Command crashed: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'mkdir $TESTDIR/ceph_test_bluefs && cd $TESTDIR/ceph_test_bluefs && ceph_test_bluefs --log-file $TESTDIR/archive/ceph_test_bluefs.log --debug-bluefs 5/20 --gtest_catch_exceptions=0'" | 1 |
| pass | 4338 | | 2026-03-31 11:18:49 | 2026-03-31 18:58:10 | 2026-03-31 19:04:54 | 0:06:44 | 0:04:50 | 0:01:54 | vps | uv2 | centos | 9.stream | rados/standalone/{supported-random-distro$/{centos_latest} workloads/crush} | 1 |
| pass | 4339 | | 2026-03-31 11:18:49 | 2026-03-31 18:58:53 | 2026-03-31 19:25:45 | 0:26:52 | 0:21:41 | 0:05:11 | vps | uv2 | centos | 9.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-4} ec_optimizations/ec_optimizations_off fast/fast mon_election/classic msgr-failures/osd-delay objectstore/{bluestore/{alloc$/{bitmap} base mem$/{normal-2} onode-segment$/{256K} write$/{v1/{compr$/{yes$/{snappy}} v1}}}} rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 4 |
| fail | 4340 | | 2026-03-31 11:18:49 | 2026-03-31 19:03:44 | 2026-03-31 20:15:50 | 1:12:06 | 1:07:14 | 0:04:52 | vps | uv2 | ubuntu | 22.04 | rados/singleton/{all/ec-esb-fio mon_election/classic msgr-failures/few msgr/async-v1only objectstore/{bluestore/{alloc$/{avl} base mem$/{normal-1} onode-segment$/{512K} write$/{random/{compr$/{no$/{no}} random}}}} rados supported-random-distro$/{ubuntu_latest}}<br>Failure Reason: Command failed on vm03 with status 123: "time sudo find /var/log/ceph -name '*.log' -print0 \| sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --" | 4 |
| pass | 4341 | | 2026-03-31 11:18:50 | 2026-03-31 19:07:46 | 2026-03-31 19:34:37 | 0:26:51 | 0:07:22 | 0:19:29 | vps | uv2 | ubuntu | 22.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 |
| dead | 4342 | | 2026-03-31 11:18:50 | 2026-03-31 19:26:36 | 2026-03-31 21:39:46 | 2:13:10 | | | vps | uv2 | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-4} crc-failures/default d-balancer/upmap-read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/{bluestore/{alloc$/{avl} base mem$/{low} onode-segment$/{512K-onoff} write$/{v1/{compr$/{no$/{no}} v1}}}} rados supported-random-distro$/{ubuntu_latest} thrashers/default_host thrashosds-health workloads/radosbench-high-concurrency}<br>Failure Reason: hit max job timeout | 4 |
| pass | 4343 | | 2026-03-31 11:18:51 | 2026-03-31 19:37:36 | 2026-03-31 20:31:46 | 0:54:10 | 0:13:43 | 0:40:27 | vps | uv2 | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_mgmt_gateway} | 3 |
| pass | 4344 | | 2026-03-31 11:18:51 | 2026-03-31 20:17:44 | 2026-03-31 20:42:42 | 0:24:58 | 0:23:47 | 0:01:11 | vps | uv2 | ubuntu | 22.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore/{alloc$/{avl} base mem$/{normal-2} onode-segment$/{1M} write$/{v2/{compr$/{yes$/{lz4}} v2}}} rados supported-random-distro$/{ubuntu_latest}} | 1 |
| fail | 4345 | | 2026-03-31 11:18:51 | 2026-03-31 20:18:40 | 2026-03-31 20:39:35 | 0:20:55 | 0:04:51 | 0:16:04 | vps | uv2 | ubuntu | 22.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr distro/{ubuntu_latest} mgr_ttl_cache/enable mon_election/classic random-objectstore$/{bluestore/{alloc$/{stupid} base mem$/{low} onode-segment$/{256K} write$/{v1/{compr$/{yes$/{snappy}} v1}}}} tasks/{1-install 2-ceph 3-mgrmodules 4-units/module_selftest}}<br>Failure Reason: Test failure: test_devicehealth (tasks.mgr.test_module_selftest.TestModuleSelftest) | 2 |
| pass | 4346 | | 2026-03-31 11:18:52 | 2026-03-31 20:33:34 | 2026-03-31 21:09:59 | 0:36:25 | 0:22:26 | 0:13:59 | vps | uv2 | centos | 9.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4} ec_optimizations/ec_optimizations_off mon_election/classic msgr-failures/osd-delay objectstore/{bluestore/{alloc$/{btree} base mem$/{normal-1} onode-segment$/{none} write$/{v1/{compr$/{no$/{no}} v1}}}} rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 |
| pass | 4347 | | 2026-03-31 11:18:52 | 2026-03-31 20:45:57 | 2026-03-31 21:03:38 | 0:17:41 | 0:05:17 | 0:12:24 | vps | uv2 | centos | 9.stream | rados/singleton-nomsgr/{all/ceph-snapmapper mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 |
| pass | 4348 | | 2026-03-31 11:18:53 | 2026-03-31 20:57:37 | 2026-03-31 21:30:16 | 0:32:39 | 0:30:22 | 0:02:17 | vps | uv2 | centos | 9.stream | rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/{bluestore/{alloc$/{avl} base mem$/{normal-2} onode-segment$/{256K} write$/{random/{compr$/{no$/{no}} random}}}} rados supported-random-distro$/{centos_latest}} | 1 |
| pass | 4349 | | 2026-03-31 11:18:53 | 2026-03-31 20:58:14 | 2026-03-31 21:21:05 | 0:22:51 | 0:08:23 | 0:14:28 | vps | uv2 | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 |
| pass | 4350 | | 2026-03-31 11:18:53 | 2026-03-31 21:11:04 | 2026-03-31 21:26:14 | 0:15:10 | 0:12:54 | 0:02:16 | vps | uv2 | ubuntu | 22.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async objectstore/{bluestore/{alloc$/{stupid} base mem$/{low} onode-segment$/{none} write$/{v2/{compr$/{yes$/{zlib}} v2}}}} rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/rados_api_tests} | 2 |
| pass | 4351 | | 2026-03-31 11:18:54 | 2026-03-31 21:12:13 | 2026-03-31 21:52:15 | 0:40:02 | 0:22:50 | 0:17:12 | vps | uv2 | ubuntu | 22.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-4} ec_optimizations/ec_optimizations_off fast/fast mon_election/classic msgr-failures/fastclose rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 4 |
| fail | 4352 | | 2026-03-31 11:18:54 | 2026-03-31 21:28:13 | 2026-03-31 22:29:34 | 1:01:21 | 0:48:04 | 0:13:17 | vps | uv2 | centos | 9.stream | rados/thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/2-size-2-min-size 1-install/tentacle backoff/peering_and_degraded ceph clusters/{three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/radosbench}<br>Failure Reason: Command failed on vm00 with status 125: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5bb3278730741031382ca9c3dc9d221a942e06a2 shell --fsid 7b709b18-2d4a-11f1-907b-8b3a535754cf -- ceph osd pool rm unique_pool_6 unique_pool_6 --yes-i-really-really-mean-it' | 3 |
| pass | 4353 | | 2026-03-31 11:18:55 | 2026-03-31 21:39:31 | 2026-03-31 21:49:19 | 0:09:48 | 0:06:52 | 0:02:56 | vps | uv2 | centos | 9.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds} ec_optimizations/ec_optimizations_off mon_election/classic msgr-failures/fastclose objectstore/{bluestore/{alloc$/{btree} base mem$/{normal-2} onode-segment$/{512K} write$/{random/{compr$/{no$/{no}} random}}}} rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 |
| pass | 4354 | | 2026-03-31 11:18:55 | 2026-03-31 21:41:18 | 2026-03-31 22:02:36 | 0:21:18 | 0:11:09 | 0:10:09 | vps | uv2 | centos | 9.stream | rados/basic/{ceph clusters/{fixed-2} mon_election/connectivity msgr-failures/many msgr/async objectstore/{bluestore/{alloc$/{hybrid} base mem$/{normal-2} onode-segment$/{512K-onoff} write$/{v1/{compr$/{yes$/{zstd}} v1}}}} rados supported-random-distro$/{centos_latest} tasks/rados_python} | 2 |
| pass | 4355 | | 2026-03-31 11:18:55 | 2026-03-31 21:50:35 | 2026-03-31 22:01:47 | 0:11:12 | 0:07:31 | 0:03:41 | vps | uv2 | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 |
| pass | 4356 | | 2026-03-31 11:18:56 | 2026-03-31 21:53:45 | 2026-03-31 22:54:33 | 1:00:48 | 0:59:05 | 0:01:43 | vps | uv2 | ubuntu | 22.04 | rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/none msgr/async objectstore/{bluestore/{alloc$/{stupid} base mem$/{normal-1} onode-segment$/{none} write$/{v2/{compr$/{yes$/{snappy}} v2}}}} rados supported-random-distro$/{ubuntu_latest}} | 1 |
| pass | 4357 | | 2026-03-31 11:18:56 | 2026-03-31 21:54:29 | 2026-03-31 22:03:15 | 0:08:46 | 0:06:14 | 0:02:32 | vps | uv2 | ubuntu | 22.04 | rados/singleton-nomsgr/{all/crushdiff mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 |
| pass | 4358 | | 2026-03-31 11:18:57 | 2026-03-31 21:55:13 | 2026-03-31 23:00:09 | 1:04:56 | 0:52:40 | 0:12:16 | vps | uv2 | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-4} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/{bluestore/{alloc$/{hybrid} base mem$/{normal-2} onode-segment$/{none} write$/{v2/{compr$/{yes$/{lz4}} v2}}}} rados supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/radosbench} | 4 |
| pass | 4359 | | 2026-03-31 11:18:57 | 2026-03-31 22:06:06 | 2026-03-31 22:13:42 | 0:07:36 | 0:05:22 | 0:02:14 | vps | uv2 | ubuntu | 22.04 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/{bluestore/{alloc$/{bitmap} base mem$/{low} onode-segment$/{none} write$/{v2/{compr$/{yes$/{lz4}} v2}}}} rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} | 2 |
| pass | 4360 | | 2026-03-31 11:18:57 | 2026-03-31 22:07:41 | 2026-03-31 23:03:23 | 0:55:42 | 0:31:49 | 0:23:53 | vps | uv2 | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_monitoring_stack_basic} | 3 |
| pass | 4361 | | 2026-03-31 11:18:58 | 2026-03-31 22:31:20 | 2026-03-31 22:46:04 | 0:14:44 | 0:13:20 | 0:01:24 | vps | uv2 | centos | 9.stream | rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/{bluestore/{alloc$/{stupid} base mem$/{normal-2} onode-segment$/{1M} write$/{v1/{compr$/{yes$/{zstd}} v1}}}} rados supported-random-distro$/{centos_latest}} | 1 |
| dead | 4362 | | 2026-03-31 11:18:58 | 2026-03-31 22:32:03 | 2026-03-31 23:14:49 | 0:42:46 | | | vps | uv2 | ubuntu | 22.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} | 1 |
| dead | 4363 | | 2026-03-31 11:18:59 | 2026-03-31 22:32:46 | 2026-03-31 23:14:39 | 0:41:53 | | | vps | uv2 | ubuntu | 22.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-4} ec_optimizations/ec_optimizations_on mon_election/connectivity msgr-failures/few objectstore/{bluestore/{alloc$/{hybrid} base mem$/{normal-1} onode-segment$/{512K-onoff} write$/{v1/{compr$/{no$/{no}} v1}}}} rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default_host thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 4 |
| dead | 4364 | | 2026-03-31 11:18:59 | 2026-03-31 23:02:37 | 2026-03-31 23:15:38 | 0:13:01 | | | vps | uv2 | ubuntu | 22.04 | rados/thrash-erasure-code-crush-4-nodes/{arch/x86_64 ceph clusters/{fixed-4} ec_optimizations/ec_optimizations_on mon_election/connectivity msgr-failures/few objectstore/{bluestore/{alloc$/{hybrid} base mem$/{normal-1} onode-segment$/{256K} write$/{v2/{compr$/{yes$/{lz4}} v2}}}} rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default_host thrashosds-health workloads/ec-rados-plugin=jerasure-k=8-m=6-crush} | 4 |
| pass | 4365 | | 2026-03-31 11:18:59 | 2026-03-31 23:05:37 | 2026-03-31 23:25:49 | 0:20:12 | 0:08:20 | 0:11:52 | vps | uv2 | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 |