| User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail |
|---|---|---|---|---|---|---|---|---|---|
| irq0 | 2026-03-05 18:18:00 | 2026-03-05 22:16:57 | 2026-03-05 22:57:10 | 0:40:13 | orch:cephadm:osds | cobaltcore-storage-v19.2.3-fasttrack-3 | vps | c24117f | 15 |
| Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| fail | 91 | | 2026-03-05 17:18:01 | 2026-03-05 22:16:57 | 2026-03-05 22:28:39 | 0:11:42 | 0:08:50 | 0:02:52 | vps | clyso-debian-13 | centos | 9.stream | orch:cephadm:osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/repave-all} | 2 | "2026-03-05T22:24:15.620263+0000 mon.vm06 (mon.0) 492 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |
| fail | 92 | | 2026-03-05 17:18:01 | 2026-03-05 22:18:38 | 2026-03-05 22:29:47 | 0:11:09 | 0:08:43 | 0:02:26 | vps | clyso-debian-13 | centos | 9.stream | orch:cephadm:osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | "2026-03-05T22:25:02.513132+0000 mon.vm00 (mon.0) 497 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |
| fail | 93 | | 2026-03-05 17:18:02 | 2026-03-05 22:19:46 | 2026-03-05 22:26:08 | 0:06:22 | 0:02:12 | 0:04:10 | vps | clyso-debian-13 | ubuntu | 22.04 | orch:cephadm:osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | Command failed on vm02 with status 1: "grep '^nvme_loop' /proc/modules \|\| sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop \| sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |
| fail | 94 | | 2026-03-05 17:18:02 | 2026-03-05 22:22:07 | 2026-03-05 22:33:23 | 0:11:16 | 0:08:19 | 0:02:57 | vps | clyso-debian-13 | centos | 9.stream | orch:cephadm:osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | "2026-03-05T22:28:25.815223+0000 mon.vm03 (mon.0) 499 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |
| fail | 95 | | 2026-03-05 17:18:03 | 2026-03-05 22:23:22 | 2026-03-05 22:37:20 | 0:13:58 | 0:08:30 | 0:05:28 | vps | clyso-debian-13 | centos | 9.stream | orch:cephadm:osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | "2026-03-05T22:32:24.623279+0000 mon.vm02 (mon.0) 496 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |
| fail | 96 | | 2026-03-05 17:18:03 | 2026-03-05 22:27:19 | 2026-03-05 22:33:46 | 0:06:27 | 0:02:02 | 0:04:25 | vps | clyso-debian-13 | ubuntu | 22.04 | orch:cephadm:osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} | 2 | Command failed on vm06 with status 1: "grep '^nvme_loop' /proc/modules \|\| sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop \| sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |
| fail | 97 | | 2026-03-05 17:18:04 | 2026-03-05 22:29:46 | 2026-03-05 22:41:31 | 0:11:45 | 0:08:09 | 0:03:36 | vps | clyso-debian-13 | centos | 9.stream | orch:cephadm:osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | "2026-03-05T22:36:23.612111+0000 mon.vm00 (mon.0) 499 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |
| fail | 98 | | 2026-03-05 17:18:04 | 2026-03-05 22:31:30 | 2026-03-05 22:47:13 | 0:15:43 | 0:10:05 | 0:05:38 | vps | clyso-debian-13 | centos | 9.stream | orch:cephadm:osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | "2026-03-05T22:41:45.425799+0000 mon.vm03 (mon.0) 501 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |
| fail | 99 | | 2026-03-05 17:18:04 | 2026-03-05 22:35:12 | 2026-03-05 22:40:24 | 0:05:12 | 0:02:21 | 0:02:51 | vps | clyso-debian-13 | ubuntu | 22.04 | orch:cephadm:osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | Command failed on vm06 with status 1: "grep '^nvme_loop' /proc/modules \|\| sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop \| sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |
| fail | 100 | | 2026-03-05 17:18:05 | 2026-03-05 22:36:23 | 2026-03-05 22:52:51 | 0:16:28 | 0:12:04 | 0:04:24 | vps | clyso-debian-13 | centos | 9.stream | orch:cephadm:osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | "2026-03-05T22:47:38.385867+0000 mon.vm02 (mon.0) 504 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |
| fail | 101 | | 2026-03-05 17:18:05 | 2026-03-05 22:38:50 | 2026-03-05 22:52:12 | 0:13:22 | 0:10:59 | 0:02:23 | vps | clyso-debian-13 | centos | 9.stream | orch:cephadm:osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/repave-all} | 2 | "2026-03-05T22:47:43.365766+0000 mon.vm01 (mon.0) 500 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |
| fail | 102 | | 2026-03-05 17:18:06 | 2026-03-05 22:40:11 | 2026-03-05 22:45:50 | 0:05:39 | 0:02:15 | 0:03:24 | vps | clyso-debian-13 | ubuntu | 22.04 | orch:cephadm:osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | Command failed on vm06 with status 1: "grep '^nvme_loop' /proc/modules \|\| sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop \| sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |
| fail | 103 | | 2026-03-05 17:18:06 | 2026-03-05 22:41:50 | 2026-03-05 22:53:17 | 0:11:27 | 0:09:21 | 0:02:06 | vps | clyso-debian-13 | centos | 9.stream | orch:cephadm:osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | "2026-03-05T22:49:30.995739+0000 mon.vm00 (mon.0) 502 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |
| fail | 104 | | 2026-03-05 17:18:07 | 2026-03-05 22:43:16 | 2026-03-05 22:57:10 | 0:13:54 | 0:08:21 | 0:05:33 | vps | clyso-debian-13 | centos | 9.stream | orch:cephadm:osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | "2026-03-05T22:52:16.375038+0000 mon.vm06 (mon.0) 497 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log |
| fail | 105 | | 2026-03-05 17:18:07 | 2026-03-05 22:47:09 | 2026-03-05 22:52:31 | 0:05:22 | 0:02:18 | 0:03:04 | vps | clyso-debian-13 | ubuntu | 22.04 | orch:cephadm:osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | Command failed on vm03 with status 1: "grep '^nvme_loop' /proc/modules \|\| sudo modprobe nvme_loop && sudo mkdir -p /sys/kernel/config/nvmet/hosts/hostnqn && sudo mkdir -p /sys/kernel/config/nvmet/ports/1 && echo loop \| sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype" |