| ID | Status | Ceph Branch | Suite Branch | Teuthology Branch | Machine | OS | Nodes | Description | Failure Reason |
|----|--------|-------------|--------------|-------------------|---------|----|-------|-------------|----------------|
| | | squid | tt-squid | clyso-debian-13 | vps | centos 9.stream | | `rados:standalone/{supported-random-distro$/{centos_latest} workloads/c2c}` | |
| | | squid | tt-squid | clyso-debian-13 | vps | centos 9.stream | | `rados:standalone/{supported-random-distro$/{centos_latest} workloads/crush}` | |
| | | squid | tt-squid | clyso-debian-13 | vps | centos 9.stream | | `rados:standalone/{supported-random-distro$/{centos_latest} workloads/erasure-code}` | |
| | | squid | tt-squid | clyso-debian-13 | vps | ubuntu 22.04 | | `rados:standalone/{supported-random-distro$/{ubuntu_latest} workloads/mgr}` | |
| | | squid | tt-squid | clyso-debian-13 | vps | centos 9.stream | | `rados:standalone/{supported-random-distro$/{centos_latest} workloads/misc}` | Command failed (workunit test misc/mclock-config.sh) on vm07 with status 1: `mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/misc/mclock-config.sh` |
| | | squid | tt-squid | clyso-debian-13 | vps | centos 9.stream | | `rados:standalone/{supported-random-distro$/{centos_latest} workloads/mon-stretch}` | |
| | | squid | tt-squid | clyso-debian-13 | vps | ubuntu 22.04 | | `rados:standalone/{supported-random-distro$/{ubuntu_latest} workloads/mon}` | |
| | | squid | tt-squid | clyso-debian-13 | vps | ubuntu 22.04 | | `rados:standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd-backfill}` | Command failed (workunit test osd-backfill/osd-backfill-recovery-log.sh) on vm06 with status 1: `mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd-backfill/osd-backfill-recovery-log.sh` |
| | | squid | tt-squid | clyso-debian-13 | vps | ubuntu 22.04 | | `rados:standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd}` | Command failed (workunit test osd/repeer-on-acting-back.sh) on vm09 with status 1: `mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/repeer-on-acting-back.sh` |
| | | squid | tt-squid | clyso-debian-13 | vps | ubuntu 22.04 | | `rados:standalone/{supported-random-distro$/{ubuntu_latest} workloads/scrub}` | |