| User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
|---|---|---|---|---|---|---|---|---|---|---|
| kyr | 2026-03-08 21:49:43 | 2026-03-08 22:32:45 | 2026-03-08 23:53:00 | 1:20:15 | rados:standalone | squid | vps | e911bde | 7 | 3 |
| Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine Type | Teuthology Branch | OS Type | OS Version | Description | Nodes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| pass | 274 | | 2026-03-08 21:49:45 | 2026-03-08 22:32:45 | 2026-03-08 22:41:35 | 0:08:50 | 0:06:29 | 0:02:21 | vps | clyso-debian-13 | centos | 9.stream | rados:standalone/{supported-random-distro$/{centos_latest} workloads/c2c} | 1 |
| pass | 275 | | 2026-03-08 21:49:45 | 2026-03-08 22:33:34 | 2026-03-08 22:59:36 | 0:26:02 | 0:22:16 | 0:03:46 | vps | clyso-debian-13 | centos | 9.stream | rados:standalone/{supported-random-distro$/{centos_latest} workloads/crush} | 1 |
| pass | 276 | | 2026-03-08 21:49:46 | 2026-03-08 22:35:35 | 2026-03-08 23:04:13 | 0:28:38 | 0:26:17 | 0:02:21 | vps | clyso-debian-13 | centos | 9.stream | rados:standalone/{supported-random-distro$/{centos_latest} workloads/erasure-code} | 1 |
| pass | 277 | | 2026-03-08 21:49:46 | 2026-03-08 22:36:11 | 2026-03-08 22:44:55 | 0:08:44 | 0:07:42 | 0:01:02 | vps | clyso-debian-13 | ubuntu | 22.04 | rados:standalone/{supported-random-distro$/{ubuntu_latest} workloads/mgr} | 1 |
| fail | 278 | | 2026-03-08 21:49:46 | 2026-03-08 22:36:55 | 2026-03-08 22:43:38 | 0:06:43 | 0:05:30 | 0:01:13 | vps | clyso-debian-13 | centos | 9.stream | rados:standalone/{supported-random-distro$/{centos_latest} workloads/misc} | 1 |
| pass | 279 | | 2026-03-08 21:49:47 | 2026-03-08 22:37:37 | 2026-03-08 22:50:16 | 0:12:39 | 0:09:59 | 0:02:40 | vps | clyso-debian-13 | centos | 9.stream | rados:standalone/{supported-random-distro$/{centos_latest} workloads/mon-stretch} | 1 |
| pass | 280 | | 2026-03-08 21:49:47 | 2026-03-08 22:38:15 | 2026-03-08 23:10:54 | 0:32:39 | 0:31:04 | 0:01:35 | vps | clyso-debian-13 | ubuntu | 22.04 | rados:standalone/{supported-random-distro$/{ubuntu_latest} workloads/mon} | 1 |
| fail | 281 | | 2026-03-08 21:49:48 | 2026-03-08 22:38:52 | 2026-03-08 22:49:36 | 0:10:44 | 0:09:18 | 0:01:26 | vps | clyso-debian-13 | ubuntu | 22.04 | rados:standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd-backfill} | 1 |
| fail | 282 | | 2026-03-08 21:49:48 | 2026-03-08 22:39:35 | 2026-03-08 23:40:16 | 1:00:41 | 0:59:22 | 0:01:19 | vps | clyso-debian-13 | ubuntu | 22.04 | rados:standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} | 1 |
| pass | 283 | | 2026-03-08 21:49:48 | 2026-03-08 22:40:13 | 2026-03-08 23:53:00 | 1:12:47 | 1:11:40 | 0:01:07 | vps | clyso-debian-13 | ubuntu | 22.04 | rados:standalone/{supported-random-distro$/{ubuntu_latest} workloads/scrub} | 1 |

Failure Reasons:

- Job 278: Command failed (workunit test misc/mclock-config.sh) on vm07 with status 1: `'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/misc/mclock-config.sh'`
- Job 281: Command failed (workunit test osd-backfill/osd-backfill-recovery-log.sh) on vm06 with status 1: `'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd-backfill/osd-backfill-recovery-log.sh'`
- Job 282: Command failed (workunit test osd/repeer-on-acting-back.sh) on vm09 with status 1: `'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/repeer-on-acting-back.sh'`