Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 286 2026-03-08 22:22:46 2026-03-08 22:40:57 2026-03-08 22:50:12 0:09:15 0:07:50 0:01:25 vps clyso-debian-13 centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2026-03-08T22:47:11.506052+0000 mon.vm08 (mon.0) 494 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 287 2026-03-08 22:22:47 2026-03-08 22:42:11 2026-03-08 22:48:48 0:06:37 0:02:35 0:04:02 vps clyso-debian-13 ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

Command failed on vm02 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype"

pass 288 2026-03-08 22:22:47 2026-03-08 22:44:47 2026-03-08 22:59:10 0:14:23 0:07:23 0:07:00 vps clyso-debian-13 centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 289 2026-03-08 22:22:48 2026-03-08 22:51:09 2026-03-08 23:14:15 0:23:06 0:21:20 0:01:46 vps clyso-debian-13 ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} 2
fail 290 2026-03-08 22:22:48 2026-03-08 22:52:13 2026-03-08 22:57:35 0:05:22 0:03:28 0:01:54 vps clyso-debian-13 centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"grep: /var/log/ceph/178106ac-1b42-11f1-8095-5f48055c15ba/ceph.log: No such file or directory" in cluster log

dead 291 2026-03-08 22:22:48 2026-03-08 22:53:34 2026-03-09 00:33:04 1:39:30 vps clyso-debian-13 centos 9.stream orch:cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 292 2026-03-08 22:22:49 2026-03-08 22:58:59 2026-03-08 23:37:46 0:38:47 0:36:44 0:02:03 vps clyso-debian-13 ubuntu 22.04 orch:cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignore_mgr_down ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/nfs} 1
pass 293 2026-03-08 22:22:49 2026-03-08 22:59:44 2026-03-08 23:08:27 0:08:43 0:07:35 0:01:08 vps clyso-debian-13 centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli} 1
pass 294 2026-03-08 22:22:50 2026-03-08 23:00:26 2026-03-08 23:09:37 0:09:11 0:07:25 0:01:46 vps clyso-debian-13 ubuntu 22.04 orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_22.04} 2-node-mgr agent/off orchestrator_cli} 2
fail 295 2026-03-08 22:22:50 2026-03-08 23:01:36 2026-03-08 23:29:10 0:27:34 0:16:28 0:11:06 vps clyso-debian-13 ubuntu 22.04 orch:cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} supported-container-hosts$/{ubuntu_22.04} workloads/cephadm_iscsi} 3
Failure Reason:

Command failed on vm02 with status 1: 'CEPH_REF=master CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram -v -- /home/ubuntu/cephtest/archive/cram.client.0/*.t'

pass 296 2026-03-08 22:22:50 2026-03-08 23:11:08 2026-03-08 23:18:23 0:07:15 0:05:54 0:01:21 vps clyso-debian-13 centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_basic} 2
fail 297 2026-03-08 22:22:51 2026-03-08 23:12:22 2026-03-08 23:24:32 0:12:10 0:07:15 0:04:55 vps clyso-debian-13 centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

"2026-03-08T23:21:03.742359+0000 mon.vm06 (mon.0) 492 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

pass 298 2026-03-08 22:22:51 2026-03-08 23:16:31 2026-03-08 23:25:08 0:08:37 0:05:12 0:03:25 vps clyso-debian-13 centos 9.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} 1
pass 299 2026-03-08 22:22:51 2026-03-08 23:19:08 2026-03-08 23:34:09 0:15:01 0:06:12 0:08:49 vps clyso-debian-13 centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} 3
pass 300 2026-03-08 22:22:52 2026-03-08 23:26:08 2026-03-08 23:40:31 0:14:23 0:09:37 0:04:46 vps clyso-debian-13 ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_extra_daemon_features} 2
fail 301 2026-03-08 22:22:52 2026-03-08 23:30:30 2026-03-08 23:49:45 0:19:15 0:17:39 0:01:36 vps clyso-debian-13 centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

"2026-03-08T23:46:25.629198+0000 mon.vm04 (mon.0) 496 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 302 2026-03-08 22:22:53 2026-03-08 23:31:44 2026-03-08 23:39:17 0:07:33 0:02:06 0:05:27 vps clyso-debian-13 ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on vm01 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype"

pass 303 2026-03-08 22:22:53 2026-03-08 23:35:17 2026-03-09 00:05:10 0:29:53 0:23:58 0:05:55 vps clyso-debian-13 centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
fail 304 2026-03-08 22:22:53 2026-03-08 23:39:08 2026-03-08 23:44:24 0:05:16 0:02:10 0:03:06 vps clyso-debian-13 ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
Failure Reason:

Command failed on vm01 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype"

pass 305 2026-03-08 22:22:54 2026-03-08 23:40:24 2026-03-09 00:02:21 0:21:57 0:19:01 0:02:56 vps clyso-debian-13 centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} 3
fail 306 2026-03-08 22:22:54 2026-03-08 23:42:19 2026-03-08 23:53:40 0:11:21 0:07:42 0:03:39 vps clyso-debian-13 centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

"2026-03-08T23:50:37.353868+0000 mon.vm01 (mon.0) 500 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

pass 307 2026-03-08 22:22:54 2026-03-08 23:45:39 2026-03-09 00:25:27 0:39:48 0:33:36 0:06:12 vps clyso-debian-13 centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
fail 308 2026-03-08 22:22:55 2026-03-08 23:51:25 2026-03-09 00:17:00 0:25:35 0:20:38 0:04:57 vps clyso-debian-13 centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"2026-03-09T00:10:00.000194+0000 mon.vm03 (mon.0) 518 : cluster [WRN] osd.3 (root=default,host=vm06) is down" in cluster log

pass 309 2026-03-08 22:22:55 2026-03-08 23:54:59 2026-03-09 00:21:50 0:26:51 0:13:00 0:13:51 vps clyso-debian-13 centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli_mon} 5
pass 310 2026-03-08 22:22:56 2026-03-09 00:07:48 2026-03-09 00:26:17 0:18:29 0:07:30 0:10:59 vps clyso-debian-13 centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_domain} 2
fail 311 2026-03-08 22:22:56 2026-03-09 00:18:17 2026-03-09 00:32:58 0:14:41 0:08:08 0:06:33 vps clyso-debian-13 centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

"2026-03-09T00:28:04.785200+0000 mon.vm02 (mon.0) 495 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

dead 312 2026-03-08 22:22:56 2026-03-09 00:22:57 2026-03-09 00:32:37 0:09:40 vps clyso-debian-13 centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
dead 313 2026-03-08 22:22:57 2026-03-09 00:24:36 2026-03-09 00:33:20 0:08:44 vps clyso-debian-13 centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
dead 314 2026-03-08 22:22:57 2026-03-09 00:25:19 2026-03-09 00:32:57 0:07:38 vps clyso-debian-13 centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
fail 315 2026-03-08 22:22:57 2026-03-09 00:26:57 2026-03-09 00:32:13 0:05:16 0:02:37 0:02:39 vps clyso-debian-13 ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
Failure Reason:

Command failed on vm03 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype"

dead 316 2026-03-08 22:22:58 2026-03-09 00:28:12 2026-03-09 00:35:23 0:07:11 vps clyso-debian-13 ubuntu 22.04 orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2