| User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
|---|---|---|---|---|---|---|---|---|---|---|---|
| kyr | 2026-03-09 11:23:05 | 2026-03-09 16:33:20 | 2026-03-09 22:46:10 | 6:12:50 | orch | squid | vps | e911bde | 75 | 73 | 30 |
| Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| fail | 492 | | 2026-03-09 11:23:07 | 2026-03-09 13:20:08 | 2026-03-09 13:38:06 | 0:17:58 | 0:08:51 | 0:09:07 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 |
| Failure Reason: Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on vm04 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh' |||||||||||||||
| pass | 493 | | 2026-03-09 11:23:08 | 2026-03-09 13:28:05 | 2026-03-09 14:04:18 | 0:36:13 | 0:32:07 | 0:04:06 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 |
| fail | 494 | | 2026-03-09 11:23:08 | 2026-03-09 13:30:15 | 2026-03-09 13:41:49 | 0:11:34 | 0:02:33 | 0:09:01 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 |
| Failure Reason: Command failed on vm08 with status 1: "grep '^nvme_loop' /proc/modules \|\| sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop \| sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |||||||||||||||
| pass | 495 | | 2026-03-09 11:23:09 | 2026-03-09 13:37:48 | 2026-03-09 14:09:27 | 0:31:39 | 0:29:50 | 0:01:49 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} | 2 |
| fail | 496 | | 2026-03-09 11:23:09 | 2026-03-09 13:39:25 | 2026-03-09 13:49:12 | 0:09:47 | 0:05:12 | 0:04:35 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
| Failure Reason: "grep: /var/log/ceph/9ab98392-1bbe-11f1-ac17-d54091389ff6/ceph.log: No such file or directory" in cluster log |||||||||||||||
| dead | 497 | | 2026-03-09 11:23:09 | 2026-03-09 13:43:11 | 2026-03-09 15:53:03 | 2:09:52 | | | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
| Failure Reason: hit max job timeout |||||||||||||||
| pass | 498 | | 2026-03-09 11:23:10 | 2026-03-09 13:50:55 | 2026-03-09 14:36:42 | 0:45:47 | 0:35:33 | 0:10:14 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignore_mgr_down ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/nfs} | 1 |
| pass | 499 | | 2026-03-09 11:23:10 | 2026-03-09 14:00:40 | 2026-03-09 14:07:24 | 0:06:44 | 0:04:50 | 0:01:54 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_adoption} | 1 |
| pass | 500 | | 2026-03-09 11:23:11 | 2026-03-09 14:01:24 | 2026-03-09 14:13:24 | 0:12:00 | 0:08:23 | 0:03:37 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream_runc} 2-node-mgr agent/off orchestrator_cli} | 2 |
| fail | 501 | | 2026-03-09 11:23:11 | 2026-03-09 14:03:23 | 2026-03-09 14:13:35 | 0:10:12 | 0:07:42 | 0:02:30 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/repave-all} | 2 |
| Failure Reason: "2026-03-09T14:10:33.096095+0000 mon.vm02 (mon.0) 495 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |||||||||||||||
| fail | 502 | | 2026-03-09 11:23:12 | 2026-03-09 14:05:34 | 2026-03-09 14:31:15 | 0:25:41 | 0:17:58 | 0:07:43 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} supported-container-hosts$/{ubuntu_22.04} workloads/cephadm_iscsi} | 3 |
| Failure Reason: Command failed on vm03 with status 1: 'CEPH_REF=master CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram -v -- /home/ubuntu/cephtest/archive/cram.client.0/*.t' |||||||||||||||
| pass | 503 | | 2026-03-09 11:23:12 | 2026-03-09 14:11:14 | 2026-03-09 14:19:49 | 0:08:35 | 0:05:21 | 0:03:14 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_basic} | 2 |
| fail | 504 | | 2026-03-09 11:23:13 | 2026-03-09 14:13:48 | 2026-03-09 14:22:57 | 0:09:09 | 0:07:29 | 0:01:40 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/basic 3-final} | 2 |
| Failure Reason: "2026-03-09T14:19:46.327274+0000 mon.vm07 (mon.0) 489 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |||||||||||||||
| pass | 505 | | 2026-03-09 11:23:14 | 2026-03-09 14:14:56 | 2026-03-09 14:21:40 | 0:06:44 | 0:05:06 | 0:01:38 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream_runc} 1-start 2-services/basic 3-final} | 1 |
| pass | 506 | | 2026-03-09 11:23:14 | 2026-03-09 14:15:39 | 2026-03-09 14:29:50 | 0:14:11 | 0:06:39 | 0:07:32 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 3 |
| fail | 507 | | 2026-03-09 11:23:15 | 2026-03-09 14:21:50 | 2026-03-09 14:46:15 | 0:24:25 | 0:21:48 | 0:02:37 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 |
| Failure Reason: Command failed on vm07 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 --force' |||||||||||||||
| pass | 508 | | 2026-03-09 11:23:15 | 2026-03-09 14:24:14 | 2026-03-09 14:45:39 | 0:21:25 | 0:12:59 | 0:08:26 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} | 3 |
| fail | 509 | | 2026-03-09 11:23:16 | 2026-03-09 14:31:38 | 2026-03-09 14:52:53 | 0:21:15 | 0:18:00 | 0:03:15 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 |
| Failure Reason: "2026-03-09T14:47:56.006771+0000 mon.vm03 (mon.0) 494 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |||||||||||||||
| fail | 510 | | 2026-03-09 11:23:16 | 2026-03-09 14:32:52 | 2026-03-09 14:38:08 | 0:05:16 | 0:02:34 | 0:02:42 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 |
| Failure Reason: Command failed on vm05 with status 1: "grep '^nvme_loop' /proc/modules \|\| sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop \| sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |||||||||||||||
| pass | 511 | | 2026-03-09 11:23:17 | 2026-03-09 14:34:07 | 2026-03-09 14:49:46 | 0:15:39 | 0:09:53 | 0:05:46 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_rgw_multisite} | 3 |
| fail | 512 | | 2026-03-09 11:23:17 | 2026-03-09 14:39:45 | 2026-03-09 14:55:04 | 0:15:19 | 0:07:27 | 0:07:52 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 |
| Failure Reason: "2026-03-09T14:51:45.815629+0000 mon.vm02 (mon.0) 496 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |||||||||||||||
| fail | 513 | | 2026-03-09 11:23:17 | 2026-03-09 14:47:03 | 2026-03-09 14:58:18 | 0:11:15 | 0:08:10 | 0:03:05 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 |
| Failure Reason: "2026-03-09T14:53:29.772429+0000 mon.vm01 (mon.0) 496 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |||||||||||||||
| fail | 514 | | 2026-03-09 11:23:18 | 2026-03-09 14:48:17 | 2026-03-09 15:17:04 | 0:28:47 | 0:25:21 | 0:03:26 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
| Failure Reason: reached maximum tries (50) after waiting for 300 seconds |||||||||||||||
| pass | 515 | | 2026-03-09 11:23:18 | 2026-03-09 14:51:02 | 2026-03-09 15:01:46 | 0:10:44 | 0:09:15 | 0:01:29 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_cephadm_timeout} | 1 |
| pass | 516 | | 2026-03-09 11:23:19 | 2026-03-09 14:51:45 | 2026-03-09 15:04:08 | 0:12:23 | 0:08:21 | 0:04:02 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_domain} | 2 |
| fail | 517 | | 2026-03-09 11:23:19 | 2026-03-09 14:54:08 | 2026-03-09 15:06:21 | 0:12:13 | 0:08:32 | 0:03:41 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 |
| Failure Reason: "2026-03-09T15:01:57.356795+0000 mon.vm02 (mon.0) 497 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |||||||||||||||
| pass | 518 | | 2026-03-09 11:23:20 | 2026-03-09 14:56:20 | 2026-03-09 15:17:43 | 0:21:23 | 0:17:10 | 0:04:13 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rotate-keys} | 2 |
| pass | 519 | | 2026-03-09 11:23:20 | 2026-03-09 14:59:41 | 2026-03-09 15:17:08 | 0:17:27 | 0:12:40 | 0:04:47 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 |
| pass | 520 | | 2026-03-09 11:23:21 | 2026-03-09 15:03:07 | 2026-03-09 15:18:09 | 0:15:02 | 0:09:35 | 0:05:27 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_set_mon_crush_locations} | 3 |
| fail | 521 | | 2026-03-09 11:23:21 | 2026-03-09 15:08:08 | 2026-03-09 15:22:19 | 0:14:11 | 0:02:01 | 0:12:10 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} | 2 |
| Failure Reason: Command failed on vm05 with status 1: "grep '^nvme_loop' /proc/modules \|\| sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop \| sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |||||||||||||||
| fail | 522 | | 2026-03-09 11:23:22 | 2026-03-09 15:18:18 | 2026-03-09 15:29:33 | 0:11:15 | 0:08:00 | 0:03:15 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 |
| Failure Reason: "2026-03-09T15:24:51.006382+0000 mon.vm00 (mon.0) 493 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |||||||||||||||
| pass | 523 | | 2026-03-09 11:23:22 | 2026-03-09 15:19:32 | 2026-03-09 15:29:18 | 0:09:46 | 0:06:42 | 0:03:04 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 3 |
| fail | 524 | | 2026-03-09 11:23:23 | 2026-03-09 15:21:18 | 2026-03-09 15:26:34 | 0:05:16 | 0:02:37 | 0:02:39 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 |
| Failure Reason: Command failed on vm03 with status 1: "grep '^nvme_loop' /proc/modules \|\| sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop \| sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |||||||||||||||
| pass | 525 | | 2026-03-09 11:23:23 | 2026-03-09 15:22:33 | 2026-03-09 15:41:49 | 0:19:16 | 0:17:39 | 0:01:37 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 |
| pass | 526 | | 2026-03-09 11:23:24 | 2026-03-09 15:23:48 | 2026-03-09 15:35:53 | 0:12:05 | 0:06:28 | 0:05:37 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_ca_signed_key} | 2 |
| fail | 527 | | 2026-03-09 11:23:24 | 2026-03-09 15:27:52 | 2026-03-09 15:40:38 | 0:12:46 | 0:08:01 | 0:04:45 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 |
| Failure Reason: "2026-03-09T15:35:43.199975+0000 mon.vm01 (mon.0) 491 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |||||||||||||||
| pass | 528 | | 2026-03-09 11:23:25 | 2026-03-09 15:30:37 | 2026-03-09 16:03:54 | 0:33:17 | 0:30:48 | 0:02:29 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 |
| fail | 529 | | 2026-03-09 11:23:25 | 2026-03-09 15:31:51 | 2026-03-09 15:39:07 | 0:07:16 | 0:04:15 | 0:03:01 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
| Failure Reason: "grep: /var/log/ceph/d57bed62-1bcd-11f1-8759-5985d92d84f3/ceph.log: No such file or directory" in cluster log |||||||||||||||
| pass | 530 | | 2026-03-09 11:23:26 | 2026-03-09 15:33:06 | 2026-03-09 15:50:33 | 0:17:27 | 0:12:33 | 0:04:54 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli} | 1 |
| pass | 531 | | 2026-03-09 11:23:26 | 2026-03-09 15:36:32 | 2026-03-09 15:50:14 | 0:13:42 | 0:09:28 | 0:04:14 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_basic} | 2 |
| fail | 532 | | 2026-03-09 11:23:27 | 2026-03-09 15:40:12 | 2026-03-09 15:46:03 | 0:05:51 | 0:02:40 | 0:03:11 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 |
| Failure Reason: Command failed on vm04 with status 1: "grep '^nvme_loop' /proc/modules \|\| sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop \| sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |||||||||||||||
| pass | 533 | | 2026-03-09 11:23:27 | 2026-03-09 15:42:02 | 2026-03-09 16:33:15 | 0:51:13 | 0:48:58 | 0:02:15 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 |
| fail | 534 | | 2026-03-09 11:23:28 | 2026-03-09 15:43:12 | 2026-03-09 15:49:56 | 0:06:44 | 0:05:22 | 0:01:22 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_cephadm} | 1 |
| Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on vm05 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |||||||||||||||
| fail | 535 | | 2026-03-09 11:23:28 | 2026-03-09 15:43:55 | 2026-03-09 15:55:26 | 0:11:31 | 0:07:43 | 0:03:48 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 |
| Failure Reason: "2026-03-09T15:52:18.657901+0000 mon.vm04 (mon.0) 492 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |||||||||||||||
| fail | 536 | | 2026-03-09 11:23:29 | 2026-03-09 15:47:26 | 2026-03-09 15:59:31 | 0:12:05 | 0:07:42 | 0:04:23 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 |
| Failure Reason: "2026-03-09T15:56:26.326654+0000 mon.vm02 (mon.0) 497 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |||||||||||||||
| fail | 537 | | 2026-03-09 11:23:29 | 2026-03-09 15:51:30 | 2026-03-09 16:02:46 | 0:11:16 | 0:08:57 | 0:02:19 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 |
| Failure Reason: "2026-03-09T15:58:45.368686+0000 mon.vm03 (mon.0) 491 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |||||||||||||||
| pass | 538 | | 2026-03-09 11:23:30 | 2026-03-09 15:52:45 | 2026-03-09 16:02:00 | 0:09:15 | 0:07:47 | 0:01:28 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 |
| pass | 539 | | 2026-03-09 11:23:30 | 2026-03-09 15:53:59 | 2026-03-09 16:04:17 | 0:10:18 | 0:06:00 | 0:04:18 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm_repos} | 1 |
| fail | 540 | | 2026-03-09 11:23:31 | 2026-03-09 15:56:16 | 2026-03-09 16:04:51 | 0:08:35 | 0:02:38 | 0:05:57 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 |
| Failure Reason: Command failed on vm07 with status 1: "grep '^nvme_loop' /proc/modules \|\| sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop \| sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |||||||||||||||
| pass | 541 | | 2026-03-09 11:23:31 | 2026-03-09 16:00:50 | 2026-03-09 16:25:19 | 0:24:29 | 0:20:56 | 0:03:33 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 |
| fail | 542 | | 2026-03-09 11:23:32 | 2026-03-09 16:03:18 | 2026-03-09 16:30:52 | 0:27:34 | 0:24:15 | 0:03:19 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
| Failure Reason: reached maximum tries (50) after waiting for 300 seconds |||||||||||||||
| pass | 543 | | 2026-03-09 11:23:32 | 2026-03-09 16:04:50 | 2026-03-09 16:21:46 | 0:16:56 | 0:13:21 | 0:03:35 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli_mon} | 5 |
| pass | 544 | | 2026-03-09 11:23:33 | 2026-03-09 16:07:45 | 2026-03-09 16:30:56 | 0:23:11 | 0:06:54 | 0:16:17 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_22.04} 2-node-mgr agent/on orchestrator_cli} | 2 |
| dead | 545 | | 2026-03-09 11:23:33 | 2026-03-09 16:22:55 | 2026-03-09 16:26:10 | 0:03:15 | 0:00:42 | 0:02:33 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_domain} | 2 |
| Failure Reason: Ansible package task (state: absent on the ceph packages) failed on vm02.local and vm11.local: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc: 1) |||||||||||||||
| dead | 546 | | 2026-03-09 11:23:34 | 2026-03-09 16:24:10 | 2026-03-09 16:27:13 | 0:03:03 | 0:01:30 | 0:01:33 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 |
| Failure Reason: Ansible package task (state: absent on the ceph packages) failed on vm06.local and vm04.local: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc: 1) |||||||||||||||
| dead | 547 | | 2026-03-09 11:23:34 | 2026-03-09 16:25:12 | 2026-03-09 16:28:08 | 0:02:56 | 0:00:42 | 0:02:14 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/rgw 3-final} | 1 |
| Failure Reason: Ansible package task (state: absent on the ceph packages) failed on vm08.local: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc: 1) |||||||||||||||
| dead | 548 | | 2026-03-09 11:23:34 | 2026-03-09 16:26:07 | 2026-03-09 16:29:47 | 0:03:40 | 0:00:46 | 0:02:54 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 3 |
| Failure Reason: Ansible package task (state: absent on the ceph packages) failed on vm11.local, vm02.local, and vm10.local: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc: 1) |||||||||||||||
| dead | 549 |
|
2026-03-09 11:23:35 | 2026-03-09 16:27:46 | 2026-03-09 16:30:49 | 0:03:03 | 0:00:44 | 0:02:19 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 |
Failure Reason:
vm04.local, vm06.local: Ansible dnf task (remove ceph packages) failed: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc=1) |
||||||||||||||
| dead | 550 |
|
2026-03-09 11:23:35 | 2026-03-09 16:28:48 | 2026-03-09 16:33:04 | 0:04:16 | 0:00:41 | 0:03:35 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_extra_daemon_features} | 2 |
Failure Reason:
vm08.local, vm02.local: Ansible dnf task (remove ceph packages) failed: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc=1) |
||||||||||||||
| dead | 551 |
|
2026-03-09 11:23:36 | 2026-03-09 16:31:04 | 2026-03-09 16:34:06 | 0:03:02 | 0:00:42 | 0:02:20 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 |
Failure Reason:
vm06.local, vm04.local: Ansible dnf task (remove ceph packages) failed: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc=1) |
||||||||||||||
| dead | 552 |
|
2026-03-09 11:23:36 | 2026-03-09 16:32:06 | 2026-03-09 16:35:21 | 0:03:15 | 0:00:54 | 0:02:21 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 |
Failure Reason:
vm05.local, vm03.local: Ansible dnf task (remove ceph packages) failed: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc=1) |
||||||||||||||
| fail | 553 |
|
2026-03-09 11:23:37 | 2026-03-09 16:33:20 | 2026-03-09 17:06:31 | 0:33:11 | 0:30:22 | 0:02:49 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 |
Failure Reason:
Command failed on vm02 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 35f3d6ac-1bd6-11f1-80fc-9d78c02c1c0a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c \'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e \'"\'"\'.up_to_date | length == 7\'"\'"\'\'' |
||||||||||||||
| fail | 554 |
|
2026-03-09 11:23:37 | 2026-03-09 16:34:29 | 2026-03-09 16:39:34 | 0:05:05 | 0:02:17 | 0:02:48 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 |
Failure Reason:
Command failed on vm06 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |
||||||||||||||
| dead | 555 |
|
2026-03-09 11:23:38 | 2026-03-09 16:35:33 | 2026-03-09 16:39:07 | 0:03:34 | 0:00:59 | 0:02:35 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_host_drain} | 3 |
Failure Reason:
vm04.local, vm05.local, vm01.local: Ansible dnf task (remove ceph packages) failed: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc=1) |
||||||||||||||
| dead | 556 |
|
2026-03-09 11:23:38 | 2026-03-09 16:37:07 | 2026-03-09 16:40:21 | 0:03:14 | 0:00:42 | 0:02:32 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 |
Failure Reason:
vm03.local, vm10.local: Ansible dnf task (remove ceph packages) failed: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc=1) |
||||||||||||||
| dead | 557 |
|
2026-03-09 11:23:39 | 2026-03-09 16:38:21 | 2026-03-09 16:41:24 | 0:03:03 | 0:01:13 | 0:01:50 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason:
vm07.local, vm11.local: Ansible dnf task (remove ceph packages) failed: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc=1) |
||||||||||||||
| dead | 558 |
|
2026-03-09 11:23:39 | 2026-03-09 16:39:23 | 2026-03-09 16:42:01 | 0:02:38 | 0:00:30 | 0:02:08 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_adoption} | 1 |
Failure Reason:
vm05.local: Ansible dnf task (remove ceph packages) failed: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc=1) |
||||||||||||||
| fail | 559 |
|
2026-03-09 11:23:39 | 2026-03-09 16:40:00 | 2026-03-09 16:45:40 | 0:05:40 | 0:02:25 | 0:03:15 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} | 2 |
Failure Reason:
Command failed on vm06 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |
||||||||||||||
| dead | 560 |
|
2026-03-09 11:23:40 | 2026-03-09 16:41:40 | 2026-03-09 16:44:43 | 0:03:03 | 0:00:47 | 0:02:16 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_basic} | 2 |
Failure Reason:
vm11.local, vm10.local: Ansible dnf task (remove ceph packages) failed: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc=1) |
||||||||||||||
| fail | 561 |
|
2026-03-09 11:23:40 | 2026-03-09 16:42:42 | 2026-03-09 16:47:46 | 0:05:04 | 0:02:35 | 0:02:29 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 |
Failure Reason:
Command failed on vm03 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |
||||||||||||||
| dead | 562 |
|
2026-03-09 11:23:41 | 2026-03-09 16:43:46 | 2026-03-09 16:46:55 | 0:03:09 | 0:00:32 | 0:02:37 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/basic 3-final} | 2 |
Failure Reason:
vm05.local, vm04.local: Ansible dnf task (remove ceph packages) failed: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc=1) |
||||||||||||||
| pass | 563 |
|
2026-03-09 11:23:41 | 2026-03-09 16:44:55 | 2026-03-09 17:20:19 | 0:35:24 | 0:32:56 | 0:02:28 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} | 2 |
| fail | 564 |
|
2026-03-09 11:23:42 | 2026-03-09 16:46:16 | 2026-03-09 16:51:27 | 0:05:11 | 0:03:40 | 0:01:31 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 |
Failure Reason:
Command failed on vm06 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |
||||||||||||||
| dead | 565 |
|
2026-03-09 11:23:42 | 2026-03-09 16:47:27 | 2026-03-09 16:50:10 | 0:02:43 | 0:00:41 | 0:02:02 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 |
Failure Reason:
vm05.local: Ansible dnf task (remove ceph packages) failed: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc=1) |
||||||||||||||
| dead | 566 |
|
2026-03-09 11:23:43 | 2026-03-09 16:48:10 | 2026-03-09 16:51:19 | 0:03:09 | 0:00:43 | 0:02:26 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 |
Failure Reason:
vm03.local, vm07.local: Ansible dnf task (remove ceph packages) failed: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc=1) |
||||||||||||||
| dead | 567 |
|
2026-03-09 11:23:43 | 2026-03-09 16:49:18 | 2026-03-09 16:52:53 | 0:03:35 | 0:00:45 | 0:02:50 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 3 |
Failure Reason:
vm01.local, vm04.local, vm00.local: Ansible dnf task (remove ceph packages) failed: "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" (rc=1) |
||||||||||||||
| dead | 568 | | 2026-03-09 11:23:44 | 2026-03-09 16:50:52 | 2026-03-09 16:54:36 | 0:03:44 | 0:00:42 | 0:03:02 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} | 2 |
Failure Reason:
Ansible dnf removal of the ceph packages failed on vm03.local and vm05.local (rc=1): "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" |
| dead | 569 | | 2026-03-09 11:23:44 | 2026-03-09 16:52:35 | 2026-03-09 16:57:44 | 0:05:09 | 0:02:12 | 0:02:57 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 |
Failure Reason:
Ansible dnf removal of the ceph packages failed on vm06.local and vm09.local (rc=1): "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" |
| dead | 570 | | 2026-03-09 11:23:45 | 2026-03-09 16:53:44 | 2026-03-09 16:57:18 | 0:03:34 | 0:01:37 | 0:01:57 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_monitoring_stack_basic} | 3 |
Failure Reason:
Ansible dnf removal of the ceph packages failed on vm01.local, vm04.local, and vm07.local (rc=1): "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" |
| dead | 571 | | 2026-03-09 11:23:45 | 2026-03-09 16:55:17 | 2026-03-09 16:58:32 | 0:03:15 | 0:00:34 | 0:02:41 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 |
Failure Reason:
Ansible dnf removal of the ceph packages failed on vm03.local and vm00.local (rc=1): "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" |
| dead | 572 | | 2026-03-09 11:23:45 | 2026-03-09 16:56:32 | 2026-03-09 17:00:28 | 0:03:56 | 0:00:51 | 0:03:05 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason:
Ansible dnf removal of the ceph packages failed on vm04.local and vm07.local (rc=1): "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" |
| pass | 573 | | 2026-03-09 11:23:46 | 2026-03-09 16:58:27 | 2026-03-09 17:11:12 | 0:12:45 | 0:10:45 | 0:02:00 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_cephadm_timeout} | 1 |
| pass | 574 | | 2026-03-09 11:23:46 | 2026-03-09 16:59:11 | 2026-03-09 17:10:27 | 0:11:16 | 0:08:43 | 0:02:33 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_domain} | 2 |
| fail | 575 | | 2026-03-09 11:23:47 | 2026-03-09 17:00:26 | 2026-03-09 17:05:37 | 0:05:11 | 0:02:16 | 0:02:55 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 |
Failure Reason:
Command failed on vm00 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |
| dead | 576 | | 2026-03-09 11:23:47 | 2026-03-09 17:01:36 | 2026-03-09 17:04:51 | 0:03:15 | 0:00:42 | 0:02:33 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 |
Failure Reason:
Ansible dnf removal of the ceph packages failed on vm07.local and vm04.local (rc=1): "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" |
| dead | 577 | | 2026-03-09 11:23:48 | 2026-03-09 17:02:51 | 2026-03-09 17:08:01 | 0:05:10 | 0:00:32 | 0:04:38 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} | 2 |
Failure Reason:
Ansible dnf removal of the ceph packages failed on vm07.local and vm04.local (rc=1): "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" |
| dead | 578 | | 2026-03-09 11:23:48 | 2026-03-09 17:06:00 | 2026-03-09 17:09:41 | 0:03:41 | 0:01:03 | 0:02:38 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_rgw_multisite} | 3 |
Failure Reason:
Ansible dnf removal of the ceph packages failed on vm00.local, vm03.local, and vm05.local (rc=1): "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" |
| dead | 579 | | 2026-03-09 11:23:49 | 2026-03-09 17:07:40 | 2026-03-09 17:10:51 | 0:03:11 | 0:00:50 | 0:02:21 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 |
Failure Reason:
Ansible dnf removal of the ceph packages failed on vm02.local and vm08.local (rc=1): "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" |
| dead | 580 | | 2026-03-09 11:23:49 | 2026-03-09 17:08:50 | 2026-03-09 17:12:00 | 0:03:10 | 0:00:59 | 0:02:11 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 |
Failure Reason:
Ansible dnf removal of the ceph packages failed on vm07.local and vm04.local (rc=1): "Failed to download metadata for repo 'baseos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried" |
| fail | 581 | | 2026-03-09 11:23:50 | 2026-03-09 17:09:59 | 2026-03-09 17:21:20 | 0:11:21 | 0:09:22 | 0:01:59 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 |
Failure Reason:
"2026-03-09T17:17:45.195171+0000 mon.vm03 (mon.0) 494 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |
| fail | 582 | | 2026-03-09 11:23:50 | 2026-03-09 17:11:19 | 2026-03-09 17:16:36 | 0:05:17 | 0:02:16 | 0:03:01 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 |
Failure Reason:
Command failed on vm06 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |
| pass | 583 | | 2026-03-09 11:23:50 | 2026-03-09 17:12:35 | 2026-03-09 18:05:48 | 0:53:13 | 0:51:37 | 0:01:36 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 |
| pass | 584 | | 2026-03-09 11:23:51 | 2026-03-09 17:13:45 | 2026-03-09 17:29:28 | 0:15:43 | 0:13:13 | 0:02:30 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 |
| fail | 585 | | 2026-03-09 11:23:51 | 2026-03-09 17:15:27 | 2026-03-09 17:45:28 | 0:30:01 | 0:24:49 | 0:05:12 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason:
reached maximum tries (50) after waiting for 300 seconds |
| dead | 586 | | 2026-03-09 11:23:52 | 2026-03-09 17:19:26 | 2026-03-09 19:24:51 | 2:05:25 | | | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason:
hit max job timeout |
| pass | 587 | | 2026-03-09 11:23:52 | 2026-03-09 17:22:44 | 2026-03-09 18:01:23 | 0:38:39 | 0:37:19 | 0:01:20 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignore_mgr_down ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/nfs} | 1 |
| pass | 588 | | 2026-03-09 11:23:53 | 2026-03-09 17:23:21 | 2026-03-09 17:34:41 | 0:11:20 | 0:08:17 | 0:03:03 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_orch_cli} | 1 |
| pass | 589 | | 2026-03-09 11:23:53 | 2026-03-09 17:24:39 | 2026-03-09 17:39:19 | 0:14:40 | 0:07:48 | 0:06:52 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream_runc} 2-node-mgr agent/off orchestrator_cli} | 2 |
| fail | 590 | | 2026-03-09 11:23:54 | 2026-03-09 17:31:19 | 2026-03-09 17:55:19 | 0:24:00 | 0:16:30 | 0:07:30 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} supported-container-hosts$/{centos_9.stream} workloads/cephadm_iscsi} | 3 |
Failure Reason:
"grep: /var/log/ceph/01455850-1bdf-11f1-910a-9936d43313cc/ceph.log: No such file or directory" in cluster log |
| pass | 591 | | 2026-03-09 11:23:54 | 2026-03-09 17:37:17 | 2026-03-09 17:49:13 | 0:11:56 | 0:06:14 | 0:05:42 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_basic} | 2 |
| fail | 592 | | 2026-03-09 11:23:55 | 2026-03-09 17:41:12 | 2026-03-09 17:55:05 | 0:13:53 | 0:07:34 | 0:06:19 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 |
Failure Reason:
"2026-03-09T17:51:58.948501+0000 mon.vm06 (mon.0) 492 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |
| pass | 593 | | 2026-03-09 11:23:55 | 2026-03-09 17:47:04 | 2026-03-09 18:00:00 | 0:12:56 | 0:08:06 | 0:04:50 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_22.04} 1-start 2-services/basic 3-final} | 1 |
| pass | 594 | | 2026-03-09 11:23:55 | 2026-03-09 17:49:59 | 2026-03-09 18:05:17 | 0:15:18 | 0:06:51 | 0:08:27 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 3 |
| pass | 595 | | 2026-03-09 11:23:56 | 2026-03-09 17:57:16 | 2026-03-09 18:34:39 | 0:37:23 | 0:34:09 | 0:03:14 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} | 2 |
| fail | 596 | | 2026-03-09 11:23:56 | 2026-03-09 17:58:36 | 2026-03-09 18:11:36 | 0:13:00 | 0:07:59 | 0:05:01 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 |
Failure Reason: "2026-03-09T18:06:33.671027+0000 mon.vm04 (mon.0) 494 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| pass | 597 | | 2026-03-09 11:23:57 | 2026-03-09 18:01:35 | 2026-03-09 18:16:23 | 0:14:48 | 0:08:52 | 0:05:56 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_ca_signed_key} | 2 |
| fail | 598 | | 2026-03-09 11:23:57 | 2026-03-09 18:06:22 | 2026-03-09 18:11:38 | 0:05:16 | 0:02:13 | 0:03:03 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 |
Failure Reason: Command failed on vm00 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype"
| fail | 599 | | 2026-03-09 11:23:58 | 2026-03-09 18:07:37 | 2026-03-09 18:10:54 | 0:03:17 | | | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 |
Failure Reason: machine vm07.local is not locked
| fail | 600 | | 2026-03-09 11:23:58 | 2026-03-09 18:08:53 | 2026-03-09 18:23:11 | 0:14:18 | 0:09:08 | 0:05:10 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 |
Failure Reason: "2026-03-09T18:19:32.223812+0000 mon.vm06 (mon.0) 496 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| fail | 601 | | 2026-03-09 11:23:59 | 2026-03-09 18:13:10 | 2026-03-09 18:20:12 | 0:07:02 | 0:05:56 | 0:01:06 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_cephadm} | 1 |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on vm04 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
| fail | 602 | | 2026-03-09 11:23:59 | 2026-03-09 18:14:12 | 2026-03-09 18:53:18 | 0:39:06 | 0:37:03 | 0:02:03 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 |
Failure Reason: Command failed on vm00 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 --force'
| pass | 603 | | 2026-03-09 11:24:00 | 2026-03-09 18:15:16 | 2026-03-09 18:25:51 | 0:10:35 | 0:07:55 | 0:02:40 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 |
| pass | 604 | | 2026-03-09 11:24:00 | 2026-03-09 18:17:50 | 2026-03-09 18:37:51 | 0:20:01 | 0:15:39 | 0:04:22 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 |
| fail | 605 | | 2026-03-09 11:24:01 | 2026-03-09 18:21:50 | 2026-03-09 18:38:59 | 0:17:09 | 0:13:41 | 0:03:28 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason: "grep: /var/log/ceph/29d79172-1be7-11f1-8e1e-79004c1f7e6c/ceph.log: No such file or directory" in cluster log
| pass | 606 | | 2026-03-09 11:24:01 | 2026-03-09 18:24:58 | 2026-03-09 18:56:55 | 0:31:57 | 0:15:16 | 0:16:41 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli_mon} | 5 |
| pass | 607 | | 2026-03-09 11:24:02 | 2026-03-09 18:40:54 | 2026-03-09 18:50:57 | 0:10:03 | 0:06:01 | 0:04:02 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_domain} | 2 |
| fail | 608 | | 2026-03-09 11:24:02 | 2026-03-09 18:42:56 | 2026-03-09 19:02:23 | 0:19:27 | 0:07:55 | 0:11:32 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 |
Failure Reason: "2026-03-09T18:57:20.187702+0000 mon.vm07 (mon.0) 493 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| fail | 609 | | 2026-03-09 11:24:02 | 2026-03-09 18:52:22 | 2026-03-09 19:02:48 | 0:10:26 | 0:07:32 | 0:02:54 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 |
Failure Reason: "2026-03-09T18:59:34.431309+0000 mon.vm06 (mon.0) 495 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| pass | 610 | | 2026-03-09 11:24:03 | 2026-03-09 18:54:47 | 2026-03-09 19:26:21 | 0:31:34 | 0:27:47 | 0:03:47 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} | 2 |
| fail | 611 | | 2026-03-09 11:24:03 | 2026-03-09 18:58:19 | 2026-03-09 19:03:23 | 0:05:04 | 0:02:27 | 0:02:37 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 |
Failure Reason: Command failed on vm01 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype"
| pass | 612 | | 2026-03-09 11:24:04 | 2026-03-09 18:59:23 | 2026-03-09 19:04:07 | 0:04:44 | 0:03:28 | 0:01:16 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_cephadm_repos} | 1 |
| fail | 613 | | 2026-03-09 11:24:04 | 2026-03-09 19:00:06 | 2026-03-09 19:13:45 | 0:13:39 | 0:08:16 | 0:05:23 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 |
Failure Reason: "2026-03-09T19:09:17.266740+0000 mon.vm07 (mon.0) 498 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| pass | 614 | | 2026-03-09 11:24:05 | 2026-03-09 19:03:44 | 2026-03-09 19:13:36 | 0:09:52 | 0:06:57 | 0:02:55 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 3 |
| fail | 615 | | 2026-03-09 11:24:05 | 2026-03-09 19:05:36 | 2026-03-09 19:17:03 | 0:11:27 | 0:08:15 | 0:03:12 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 |
Failure Reason: "2026-03-09T19:12:27.733870+0000 mon.vm03 (mon.0) 494 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| pass | 616 | | 2026-03-09 11:24:06 | 2026-03-09 19:07:02 | 2026-03-09 19:28:28 | 0:21:26 | 0:11:26 | 0:10:00 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_extra_daemon_features} | 2 |
| fail | 617 | | 2026-03-09 11:24:06 | 2026-03-09 19:16:27 | 2026-03-09 19:43:44 | 0:27:17 | 0:25:50 | 0:01:27 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason: reached maximum tries (50) after waiting for 300 seconds
| pass | 618 | | 2026-03-09 11:24:07 | 2026-03-09 19:17:42 | 2026-03-09 19:24:26 | 0:06:44 | 0:05:28 | 0:01:16 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_adoption} | 1 |
| fail | 619 | | 2026-03-09 11:24:07 | 2026-03-09 19:18:25 | 2026-03-09 19:29:47 | 0:11:22 | 0:08:35 | 0:02:47 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/repave-all} | 2 |
Failure Reason: "2026-03-09T19:25:13.425450+0000 mon.vm05 (mon.0) 492 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| pass | 620 | | 2026-03-09 11:24:07 | 2026-03-09 19:19:46 | 2026-03-09 19:34:03 | 0:14:17 | 0:08:44 | 0:05:33 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_basic} | 2 |
| fail | 621 | | 2026-03-09 11:24:08 | 2026-03-09 19:24:02 | 2026-03-09 19:29:52 | 0:05:50 | 0:02:46 | 0:03:04 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/basic 3-final} | 2 |
Failure Reason: Command failed on vm02 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype"
| pass | 622 | | 2026-03-09 11:24:08 | 2026-03-09 19:25:51 | 2026-03-09 19:45:54 | 0:20:03 | 0:17:04 | 0:02:59 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 |
| fail | 623 | | 2026-03-09 11:24:09 | 2026-03-09 19:27:53 | 2026-03-09 19:40:07 | 0:12:14 | 0:08:02 | 0:04:12 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 |
Failure Reason: "2026-03-09T19:35:25.687971+0000 mon.vm01 (mon.0) 496 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| pass | 624 | | 2026-03-09 11:24:09 | 2026-03-09 19:30:06 | 2026-03-09 19:59:23 | 0:29:17 | 0:27:08 | 0:02:09 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} | 2 |
| pass | 625 | | 2026-03-09 11:24:10 | 2026-03-09 19:31:21 | 2026-03-09 19:55:58 | 0:24:37 | 0:19:11 | 0:05:26 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_host_drain} | 3 |
| fail | 626 | | 2026-03-09 11:24:10 | 2026-03-09 19:35:57 | 2026-03-09 19:45:17 | 0:09:20 | 0:02:47 | 0:06:33 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 |
Failure Reason: Command failed on vm01 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype"
| fail | 627 | | 2026-03-09 11:24:11 | 2026-03-09 19:41:16 | 2026-03-09 19:55:04 | 0:13:48 | 0:08:29 | 0:05:19 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 |
Failure Reason: "2026-03-09T19:50:36.490682+0000 mon.vm07 (mon.0) 500 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| pass | 628 | | 2026-03-09 11:24:11 | 2026-03-09 19:45:03 | 2026-03-09 20:22:43 | 0:37:40 | 0:34:21 | 0:03:19 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 |
| fail | 629 | | 2026-03-09 11:24:12 | 2026-03-09 19:46:40 | 2026-03-09 19:51:56 | 0:05:16 | 0:02:50 | 0:02:26 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 |
Failure Reason: Command failed on vm00 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype"
| fail | 630 | | 2026-03-09 11:24:12 | 2026-03-09 19:47:56 | 2026-03-09 19:57:27 | 0:09:31 | 0:02:16 | 0:07:15 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 |
Failure Reason: Command failed on vm00 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype"
| fail | 631 | | 2026-03-09 11:24:13 | 2026-03-09 19:53:26 | 2026-03-09 20:08:16 | 0:14:50 | 0:12:38 | 0:02:12 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 |
Failure Reason: Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on vm04 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh'
| fail | 632 | | 2026-03-09 11:24:13 | 2026-03-09 19:54:15 | 2026-03-09 20:02:40 | 0:08:25 | 0:04:41 | 0:03:44 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason: "grep: /var/log/ceph/b207a5fe-1bf2-11f1-9460-c949d5f85abc/ceph.log: No such file or directory" in cluster log
| pass | 633 | | 2026-03-09 11:24:13 | 2026-03-09 19:56:39 | 2026-03-09 20:07:23 | 0:10:44 | 0:09:32 | 0:01:12 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_cephadm_timeout} | 1 |
| pass | 634 | | 2026-03-09 11:24:14 | 2026-03-09 19:57:22 | 2026-03-09 20:08:38 | 0:11:16 | 0:08:29 | 0:02:47 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream_runc} 2-node-mgr agent/on orchestrator_cli} | 2 |
| pass | 635 | | 2026-03-09 11:24:14 | 2026-03-09 19:58:36 | 2026-03-09 20:07:52 | 0:09:16 | 0:06:17 | 0:02:59 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_domain} | 2 |
| fail | 636 | | 2026-03-09 11:24:15 | 2026-03-09 19:59:51 | 2026-03-09 20:09:25 | 0:09:34 | 0:07:18 | 0:02:16 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 |
Failure Reason: "2026-03-09T20:06:03.642633+0000 mon.vm05 (mon.0) 492 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| pass | 637 | | 2026-03-09 11:24:15 | 2026-03-09 20:01:24 | 2026-03-09 20:11:34 | 0:10:10 | 0:06:04 | 0:04:06 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/rgw 3-final} | 1 |
| pass | 638 | | 2026-03-09 11:24:16 | 2026-03-09 20:03:33 | 2026-03-09 20:17:34 | 0:14:01 | 0:07:18 | 0:06:43 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 3 |
| fail | 639 | | 2026-03-09 11:24:16 | 2026-03-09 20:09:33 | 2026-03-09 20:22:49 | 0:13:16 | 0:10:14 | 0:03:02 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} | 2 |
Failure Reason: "2026-03-09T20:18:08.135317+0000 mon.vm10 (mon.0) 494 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| pass | 640 | | 2026-03-09 11:24:17 | 2026-03-09 20:10:48 | 2026-03-09 20:58:06 | 0:47:18 | 0:44:46 | 0:02:32 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 |
| pass | 641 | | 2026-03-09 11:24:17 | 2026-03-09 20:12:03 | 2026-03-09 20:29:52 | 0:17:49 | 0:15:56 | 0:01:53 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_monitoring_stack_basic} | 3 |
| fail | 642 | | 2026-03-09 11:24:18 | 2026-03-09 20:13:51 | 2026-03-09 20:22:46 | 0:08:55 | 0:02:11 | 0:06:44 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 |
Failure Reason: Command failed on vm00 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype"
| fail | 643 | | 2026-03-09 11:24:18 | 2026-03-09 20:18:46 | 2026-03-09 20:32:08 | 0:13:22 | 0:07:29 | 0:05:53 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 |
Failure Reason: "2026-03-09T20:28:57.751674+0000 mon.vm07 (mon.0) 495 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| fail | 644 | | 2026-03-09 11:24:18 | 2026-03-09 20:24:08 | 2026-03-09 20:33:17 | 0:09:09 | 0:07:45 | 0:01:24 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 |
Failure Reason: "2026-03-09T20:30:19.156049+0000 mon.vm02 (mon.0) 495 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| pass | 645 | | 2026-03-09 11:24:19 | 2026-03-09 20:25:16 | 2026-03-09 20:35:07 | 0:09:51 | 0:07:03 | 0:02:48 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 |
| pass | 646 | | 2026-03-09 11:24:19 | 2026-03-09 20:27:06 | 2026-03-09 20:41:44 | 0:14:38 | 0:08:30 | 0:06:08 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_rgw_multisite} | 3 |
| fail | 647 | | 2026-03-09 11:24:20 | 2026-03-09 20:31:43 | 2026-03-09 21:03:34 | 0:31:51 | 0:29:11 | 0:02:40 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason: reached maximum tries (50) after waiting for 300 seconds
| pass | 648 | | 2026-03-09 11:24:20 | 2026-03-09 20:33:32 | 2026-03-09 20:42:16 | 0:08:44 | 0:07:43 | 0:01:01 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli} | 1 |
| pass | 649 | | 2026-03-09 11:24:21 | 2026-03-09 20:34:15 | 2026-03-09 20:41:36 | 0:07:21 | 0:05:35 | 0:01:46 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_basic} | 2 |
| fail | 650 | | 2026-03-09 11:24:21 | 2026-03-09 20:35:35 | 2026-03-09 20:44:50 | 0:09:15 | 0:07:45 | 0:01:30 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 |
Failure Reason: "2026-03-09T20:41:38.648457+0000 mon.vm01 (mon.0) 495 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| fail | 651 | | 2026-03-09 11:24:22 | 2026-03-09 20:36:49 | 2026-03-09 20:46:48 | 0:09:59 | 0:02:14 | 0:07:45 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 |
Failure Reason: Command failed on vm00 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype"
| dead | 652 | | 2026-03-09 11:24:22 | 2026-03-09 20:42:48 | 2026-03-09 22:46:10 | 2:03:22 | | | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 |
Failure Reason: hit max job timeout
| pass | 653 | | 2026-03-09 11:24:23 | 2026-03-09 20:43:57 | 2026-03-09 20:58:48 | 0:14:51 | 0:10:18 | 0:04:33 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_set_mon_crush_locations} | 3 |
| fail | 654 | | 2026-03-09 11:24:23 | 2026-03-09 20:46:47 | 2026-03-09 20:58:19 | 0:11:32 | 0:08:12 | 0:03:20 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 |
Failure Reason: "2026-03-09T20:53:38.398362+0000 mon.vm00 (mon.0) 493 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| pass | 655 | | 2026-03-09 11:24:24 | 2026-03-09 20:48:19 | 2026-03-09 21:43:23 | 0:55:04 | 0:42:43 | 0:12:21 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 |
| fail | 656 | | 2026-03-09 11:24:24 | 2026-03-09 20:59:20 | 2026-03-09 21:10:36 | 0:11:16 | 0:08:40 | 0:02:36 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 |
Failure Reason: "2026-03-09T21:06:28.263222+0000 mon.vm00 (mon.0) 482 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| pass | 657 | | 2026-03-09 11:24:25 | 2026-03-09 21:00:35 | 2026-03-09 21:10:23 | 0:09:48 | 0:07:28 | 0:02:20 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 3 |
| pass | 658 | | 2026-03-09 11:24:25 | 2026-03-09 21:02:22 | 2026-03-09 21:26:54 | 0:24:32 | 0:21:22 | 0:03:10 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 |
| fail | 659 | | 2026-03-09 11:24:25 | 2026-03-09 21:04:53 | 2026-03-09 21:21:47 | 0:16:54 | 0:08:26 | 0:08:28 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 |
Failure Reason: "2026-03-09T21:17:09.774106+0000 mon.vm01 (mon.0) 495 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| pass | 660 | | 2026-03-09 11:24:26 | 2026-03-09 21:11:46 | 2026-03-09 21:31:02 | 0:19:16 | 0:17:32 | 0:01:44 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_ca_signed_key} | 2 |
| fail | 661 | | 2026-03-09 11:24:26 | 2026-03-09 21:13:00 | 2026-03-09 21:20:10 | 0:07:10 | 0:04:18 | 0:02:52 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason: "grep: /var/log/ceph/79ecf65a-1bfd-11f1-82dd-e5af7d3bb33e/ceph.log: No such file or directory" in cluster log
| pass | 662 | | 2026-03-09 11:24:27 | 2026-03-09 21:14:10 | 2026-03-09 21:55:35 | 0:41:25 | 0:25:44 | 0:15:41 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_orch_cli_mon} | 5 |
| pass | 663 | | 2026-03-09 11:24:27 | 2026-03-09 21:29:33 | 2026-03-09 21:42:21 | 0:12:48 | 0:09:05 | 0:03:43 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_domain} | 2 |
| fail | 664 | | 2026-03-09 11:24:28 | 2026-03-09 21:32:20 | 2026-03-09 21:47:34 | 0:15:14 | 0:02:39 | 0:12:35 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 |
Failure Reason: Command failed on vm00 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype"
| fail | 665 | | 2026-03-09 11:24:28 | 2026-03-09 21:43:33 | 2026-03-09 21:48:43 | 0:05:10 | 0:02:33 | 0:02:37 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 |
Failure Reason: Command failed on vm05 with status 1: "grep '^nvme_loop' /proc/modules || sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype"
| fail | 666 | | 2026-03-09 11:24:29 | 2026-03-09 21:44:43 | 2026-03-09 21:59:00 | 0:14:17 | 0:09:16 | 0:05:01 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 |
Failure Reason: "2026-03-09T21:55:28.355850+0000 mon.vm00 (mon.0) 494 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log
| pass | 667 | | 2026-03-09 11:24:29 | 2026-03-09 21:48:59 | 2026-03-09 22:00:15 | 0:11:16 | 0:08:20 | 0:02:56 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 |
| pass | 668 | | 2026-03-09 11:24:30 | 2026-03-09 21:50:14 | 2026-03-09 22:00:59 | 0:10:45 | 0:09:50 | 0:00:55 | vps | clyso-debian-13 | ubuntu | 22.04 | orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_cephadm} | 1 |
| pass | 669 | | 2026-03-09 11:24:30 | 2026-03-09 21:50:58 | 2026-03-09 22:21:03 | 0:30:05 | 0:22:46 | 0:07:19 | vps | clyso-debian-13 | centos | 9.stream | orch/cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 |