2026-03-10T09:52:02.904 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T09:52:02.908 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T09:52:02.930 INFO:teuthology.run:Config: archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/990
branch: squid
description: orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_rgw_multisite}
email: null
first_in_suite: false
flavor: default
job_id: '990'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 1
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: false
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - MON_DOWN
    - mons down
    - mon down
    - out of quorum
    - CEPHADM_STRAY_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - mon.a
  - mgr.a
  - osd.0
- - host.b
  - mon.b
  - mgr.b
  - osd.1
- - host.c
  - mon.c
  - osd.2
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm01.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLsGYkptwdCG2aVTvAp/z6biuXRm7mGwPrcnq+wQwpGt7kY5C7g/ymQbiZ4nOj/lDzzdo+CZg+pBpAfDHi84Ono=
  vm02.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGn9kVRgzyX7apeugW3Svxoy7RsmtBmitID72i6QC3uFITGDe6HFz1MgW75P4uaj4ZUoJBcVVClWyd+pIWn4p/I=
  vm08.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC59q7gARiQS2s6EWclSd9qx+TskkvdBEQk34l6i+Tp6BmV7imdZVFa+GbCI1QlTOrlkAMpLmK18unkLd3hhtVY=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install runc nvmetcli nvme-cli -y
    - sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
    - sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
- install: null
- cephadm: null
- cephadm.shell:
    host.a:
    - ceph mgr module enable rgw
- rgw_module.apply:
    specs:
    - rgw_realm: myrealm1
      rgw_zone: myzone1
      rgw_zonegroup: myzonegroup1
      spec:
        rgw_frontend_port: 5500
- cephadm.shell:
    host.a:
    - 'set -e

      set -x

      while true; do TOKEN=$(ceph rgw realm tokens | jq -r ''.[0].token''); echo $TOKEN; if [ "$TOKEN" != "master zone has no endpoint" ]; then break; fi; sleep 5; done

      TOKENS=$(ceph rgw realm tokens)

      echo $TOKENS | jq --exit-status ''.[0].realm == "myrealm1"''

      echo $TOKENS | jq --exit-status ''.[0].token''

      TOKEN_JSON=$(ceph rgw realm tokens | jq -r ''.[0].token'' | base64 --decode)

      echo $TOKEN_JSON | jq --exit-status ''.realm_name == "myrealm1"''

      echo $TOKEN_JSON | jq --exit-status ''.endpoint | test("http://.+:\\d+")''

      echo $TOKEN_JSON | jq --exit-status ''.realm_id | test("^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")''

      echo $TOKEN_JSON | jq --exit-status ''.access_key''

      echo $TOKEN_JSON | jq --exit-status ''.secret''

      '
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T09:52:02.930 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T09:52:02.930 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T09:52:02.930 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T09:52:02.931 INFO:teuthology.task.internal:Checking packages...
2026-03-10T09:52:02.931 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T09:52:02.931 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T09:52:02.931 INFO:teuthology.packaging:ref: None
2026-03-10T09:52:02.931 INFO:teuthology.packaging:tag: None
2026-03-10T09:52:02.931 INFO:teuthology.packaging:branch: squid
2026-03-10T09:52:02.931 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:52:02.931 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-10T09:52:03.760 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-10T09:52:03.761 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T09:52:03.761 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T09:52:03.761 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T09:52:03.762 INFO:teuthology.task.internal:Saving configuration
2026-03-10T09:52:03.766 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T09:52:03.767 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T09:52:03.773 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm01.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/990', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 09:50:23.961253', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:01', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLsGYkptwdCG2aVTvAp/z6biuXRm7mGwPrcnq+wQwpGt7kY5C7g/ymQbiZ4nOj/lDzzdo+CZg+pBpAfDHi84Ono='}
2026-03-10T09:52:03.778 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm02.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/990', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 09:50:23.960636', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:02', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGn9kVRgzyX7apeugW3Svxoy7RsmtBmitID72i6QC3uFITGDe6HFz1MgW75P4uaj4ZUoJBcVVClWyd+pIWn4p/I='}
2026-03-10T09:52:03.783 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm08.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/990', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 09:50:23.961032', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:08', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC59q7gARiQS2s6EWclSd9qx+TskkvdBEQk34l6i+Tp6BmV7imdZVFa+GbCI1QlTOrlkAMpLmK18unkLd3hhtVY='}
2026-03-10T09:52:03.783 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T09:52:03.784 INFO:teuthology.task.internal:roles: ubuntu@vm01.local - ['host.a', 'mon.a', 'mgr.a', 'osd.0']
2026-03-10T09:52:03.784 INFO:teuthology.task.internal:roles: ubuntu@vm02.local - ['host.b', 'mon.b', 'mgr.b', 'osd.1']
2026-03-10T09:52:03.784 INFO:teuthology.task.internal:roles: ubuntu@vm08.local - ['host.c', 'mon.c', 'osd.2']
2026-03-10T09:52:03.784 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T09:52:03.789 DEBUG:teuthology.task.console_log:vm01 does not support IPMI; excluding
2026-03-10T09:52:03.793 DEBUG:teuthology.task.console_log:vm02 does not support IPMI; excluding
2026-03-10T09:52:03.798 DEBUG:teuthology.task.console_log:vm08 does not support IPMI; excluding
2026-03-10T09:52:03.798 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7fa5e0372170>, signals=[15])
2026-03-10T09:52:03.798 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T09:52:03.799 INFO:teuthology.task.internal:Opening connections...
2026-03-10T09:52:03.799 DEBUG:teuthology.task.internal:connecting to ubuntu@vm01.local
2026-03-10T09:52:03.799 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T09:52:03.855 DEBUG:teuthology.task.internal:connecting to ubuntu@vm02.local
2026-03-10T09:52:03.856 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T09:52:03.915 DEBUG:teuthology.task.internal:connecting to ubuntu@vm08.local
2026-03-10T09:52:03.915 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T09:52:03.976 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T09:52:03.977 DEBUG:teuthology.orchestra.run.vm01:> uname -m
2026-03-10T09:52:03.995 INFO:teuthology.orchestra.run.vm01.stdout:x86_64
2026-03-10T09:52:03.995 DEBUG:teuthology.orchestra.run.vm01:> cat /etc/os-release
2026-03-10T09:52:04.053 INFO:teuthology.orchestra.run.vm01.stdout:NAME="CentOS Stream"
2026-03-10T09:52:04.053 INFO:teuthology.orchestra.run.vm01.stdout:VERSION="9"
2026-03-10T09:52:04.053 INFO:teuthology.orchestra.run.vm01.stdout:ID="centos"
2026-03-10T09:52:04.053 INFO:teuthology.orchestra.run.vm01.stdout:ID_LIKE="rhel fedora"
2026-03-10T09:52:04.053 INFO:teuthology.orchestra.run.vm01.stdout:VERSION_ID="9"
2026-03-10T09:52:04.054 INFO:teuthology.orchestra.run.vm01.stdout:PLATFORM_ID="platform:el9"
2026-03-10T09:52:04.054 INFO:teuthology.orchestra.run.vm01.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T09:52:04.054 INFO:teuthology.orchestra.run.vm01.stdout:ANSI_COLOR="0;31"
2026-03-10T09:52:04.054 INFO:teuthology.orchestra.run.vm01.stdout:LOGO="fedora-logo-icon"
2026-03-10T09:52:04.054 INFO:teuthology.orchestra.run.vm01.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T09:52:04.054 INFO:teuthology.orchestra.run.vm01.stdout:HOME_URL="https://centos.org/"
2026-03-10T09:52:04.054 INFO:teuthology.orchestra.run.vm01.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T09:52:04.054 INFO:teuthology.orchestra.run.vm01.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T09:52:04.054 INFO:teuthology.orchestra.run.vm01.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T09:52:04.054 INFO:teuthology.lock.ops:Updating vm01.local on lock server
2026-03-10T09:52:04.059 DEBUG:teuthology.orchestra.run.vm02:> uname -m
2026-03-10T09:52:04.077 INFO:teuthology.orchestra.run.vm02.stdout:x86_64
2026-03-10T09:52:04.077 DEBUG:teuthology.orchestra.run.vm02:> cat /etc/os-release
2026-03-10T09:52:04.133 INFO:teuthology.orchestra.run.vm02.stdout:NAME="CentOS Stream"
2026-03-10T09:52:04.133 INFO:teuthology.orchestra.run.vm02.stdout:VERSION="9"
2026-03-10T09:52:04.133 INFO:teuthology.orchestra.run.vm02.stdout:ID="centos"
2026-03-10T09:52:04.133 INFO:teuthology.orchestra.run.vm02.stdout:ID_LIKE="rhel fedora"
2026-03-10T09:52:04.133 INFO:teuthology.orchestra.run.vm02.stdout:VERSION_ID="9"
2026-03-10T09:52:04.133 INFO:teuthology.orchestra.run.vm02.stdout:PLATFORM_ID="platform:el9"
2026-03-10T09:52:04.133 INFO:teuthology.orchestra.run.vm02.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T09:52:04.133 INFO:teuthology.orchestra.run.vm02.stdout:ANSI_COLOR="0;31"
2026-03-10T09:52:04.133 INFO:teuthology.orchestra.run.vm02.stdout:LOGO="fedora-logo-icon"
2026-03-10T09:52:04.133 INFO:teuthology.orchestra.run.vm02.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T09:52:04.133 INFO:teuthology.orchestra.run.vm02.stdout:HOME_URL="https://centos.org/"
2026-03-10T09:52:04.133 INFO:teuthology.orchestra.run.vm02.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T09:52:04.133 INFO:teuthology.orchestra.run.vm02.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T09:52:04.133 INFO:teuthology.orchestra.run.vm02.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T09:52:04.134 INFO:teuthology.lock.ops:Updating vm02.local on lock server
2026-03-10T09:52:04.138 DEBUG:teuthology.orchestra.run.vm08:> uname -m
2026-03-10T09:52:04.155 INFO:teuthology.orchestra.run.vm08.stdout:x86_64
2026-03-10T09:52:04.155 DEBUG:teuthology.orchestra.run.vm08:> cat /etc/os-release
2026-03-10T09:52:04.213 INFO:teuthology.orchestra.run.vm08.stdout:NAME="CentOS Stream"
2026-03-10T09:52:04.213 INFO:teuthology.orchestra.run.vm08.stdout:VERSION="9"
2026-03-10T09:52:04.213 INFO:teuthology.orchestra.run.vm08.stdout:ID="centos"
2026-03-10T09:52:04.213 INFO:teuthology.orchestra.run.vm08.stdout:ID_LIKE="rhel fedora"
2026-03-10T09:52:04.213 INFO:teuthology.orchestra.run.vm08.stdout:VERSION_ID="9"
2026-03-10T09:52:04.213 INFO:teuthology.orchestra.run.vm08.stdout:PLATFORM_ID="platform:el9"
2026-03-10T09:52:04.213 INFO:teuthology.orchestra.run.vm08.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T09:52:04.213 INFO:teuthology.orchestra.run.vm08.stdout:ANSI_COLOR="0;31"
2026-03-10T09:52:04.213 INFO:teuthology.orchestra.run.vm08.stdout:LOGO="fedora-logo-icon"
2026-03-10T09:52:04.213 INFO:teuthology.orchestra.run.vm08.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T09:52:04.213 INFO:teuthology.orchestra.run.vm08.stdout:HOME_URL="https://centos.org/"
2026-03-10T09:52:04.213 INFO:teuthology.orchestra.run.vm08.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T09:52:04.213 INFO:teuthology.orchestra.run.vm08.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T09:52:04.213 INFO:teuthology.orchestra.run.vm08.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T09:52:04.213 INFO:teuthology.lock.ops:Updating vm08.local on lock server
2026-03-10T09:52:04.217 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T09:52:04.219 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T09:52:04.220 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T09:52:04.220 DEBUG:teuthology.orchestra.run.vm01:> test '!' -e /home/ubuntu/cephtest
2026-03-10T09:52:04.223 DEBUG:teuthology.orchestra.run.vm02:> test '!' -e /home/ubuntu/cephtest
2026-03-10T09:52:04.224 DEBUG:teuthology.orchestra.run.vm08:> test '!' -e /home/ubuntu/cephtest
2026-03-10T09:52:04.269 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T09:52:04.270 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T09:52:04.270 DEBUG:teuthology.orchestra.run.vm01:> test -z $(ls -A /var/lib/ceph)
2026-03-10T09:52:04.279 DEBUG:teuthology.orchestra.run.vm02:> test -z $(ls -A /var/lib/ceph)
2026-03-10T09:52:04.281 DEBUG:teuthology.orchestra.run.vm08:> test -z $(ls -A /var/lib/ceph)
2026-03-10T09:52:04.294 INFO:teuthology.orchestra.run.vm02.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T09:52:04.296 INFO:teuthology.orchestra.run.vm01.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T09:52:04.325 INFO:teuthology.orchestra.run.vm08.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T09:52:04.325 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T09:52:04.333 DEBUG:teuthology.orchestra.run.vm01:> test -e /ceph-qa-ready
2026-03-10T09:52:04.355 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:52:04.540 DEBUG:teuthology.orchestra.run.vm02:> test -e /ceph-qa-ready
2026-03-10T09:52:04.555 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:52:04.799 DEBUG:teuthology.orchestra.run.vm08:> test -e /ceph-qa-ready
2026-03-10T09:52:04.813 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:52:04.996 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T09:52:04.997 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T09:52:04.997 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T09:52:04.999 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T09:52:05.001 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T09:52:05.018 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T09:52:05.020 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T09:52:05.021 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T09:52:05.021 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T09:52:05.055 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T09:52:05.058 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T09:52:05.078 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T09:52:05.079 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T09:52:05.079 DEBUG:teuthology.orchestra.run.vm01:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T09:52:05.124 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:52:05.125 DEBUG:teuthology.orchestra.run.vm02:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T09:52:05.138 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:52:05.138 DEBUG:teuthology.orchestra.run.vm08:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T09:52:05.153 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:52:05.153 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T09:52:05.167 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T09:52:05.180 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T09:52:05.191 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T09:52:05.202 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T09:52:05.204 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T09:52:05.214 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T09:52:05.218 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T09:52:05.227 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T09:52:05.229 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T09:52:05.230 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T09:52:05.230 DEBUG:teuthology.orchestra.run.vm01:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T09:52:05.246 DEBUG:teuthology.orchestra.run.vm02:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T09:52:05.258 DEBUG:teuthology.orchestra.run.vm08:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T09:52:05.292 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T09:52:05.294 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T09:52:05.294 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T09:52:05.310 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T09:52:05.327 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T09:52:05.348 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T09:52:05.387 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T09:52:05.442 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T09:52:05.443 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T09:52:05.500 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T09:52:05.521 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T09:52:05.580 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T09:52:05.580 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T09:52:05.638 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T09:52:05.663 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T09:52:05.724 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T09:52:05.724 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T09:52:05.779 DEBUG:teuthology.orchestra.run.vm01:> sudo service rsyslog restart
2026-03-10T09:52:05.781 DEBUG:teuthology.orchestra.run.vm02:> sudo service rsyslog restart
2026-03-10T09:52:05.783 DEBUG:teuthology.orchestra.run.vm08:> sudo service rsyslog restart
2026-03-10T09:52:05.809 INFO:teuthology.orchestra.run.vm01.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T09:52:05.809 INFO:teuthology.orchestra.run.vm02.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T09:52:05.846 INFO:teuthology.orchestra.run.vm08.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T09:52:06.204 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T09:52:06.206 INFO:teuthology.task.internal:Starting timer...
2026-03-10T09:52:06.206 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T09:52:06.217 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T09:52:06.219 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0']}
2026-03-10T09:52:06.219 INFO:teuthology.task.selinux:Excluding vm01: VMs are not yet supported
2026-03-10T09:52:06.219 INFO:teuthology.task.selinux:Excluding vm02: VMs are not yet supported
2026-03-10T09:52:06.219 INFO:teuthology.task.selinux:Excluding vm08: VMs are not yet supported
2026-03-10T09:52:06.219 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T09:52:06.219 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T09:52:06.219 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T09:52:06.219 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T09:52:06.221 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T09:52:06.221 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T09:52:06.222 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T09:52:06.718 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T09:52:06.724 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T09:52:06.724 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventorypw2cpp28 --limit vm01.local,vm02.local,vm08.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T09:54:16.158 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm01.local'), Remote(name='ubuntu@vm02.local'), Remote(name='ubuntu@vm08.local')]
2026-03-10T09:54:16.159 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm01.local'
2026-03-10T09:54:16.159 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T09:54:16.228 DEBUG:teuthology.orchestra.run.vm01:> true
2026-03-10T09:54:16.307 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm01.local'
2026-03-10T09:54:16.307 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm02.local'
2026-03-10T09:54:16.308 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T09:54:16.373 DEBUG:teuthology.orchestra.run.vm02:> true
2026-03-10T09:54:16.451 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm02.local'
2026-03-10T09:54:16.451 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm08.local'
2026-03-10T09:54:16.451 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T09:54:16.511 DEBUG:teuthology.orchestra.run.vm08:> true
2026-03-10T09:54:16.596 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm08.local'
2026-03-10T09:54:16.596 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T09:54:16.600 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T09:54:16.600 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T09:54:16.600 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T09:54:16.602 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T09:54:16.602 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T09:54:16.604 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T09:54:16.604 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T09:54:16.648 INFO:teuthology.orchestra.run.vm01.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T09:54:16.655 INFO:teuthology.orchestra.run.vm02.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T09:54:16.665 INFO:teuthology.orchestra.run.vm01.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T09:54:16.675 INFO:teuthology.orchestra.run.vm02.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T09:54:16.677 INFO:teuthology.orchestra.run.vm08.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T09:54:16.693 INFO:teuthology.orchestra.run.vm08.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T09:54:16.702 INFO:teuthology.orchestra.run.vm01.stderr:sudo: ntpd: command not found
2026-03-10T09:54:16.713 INFO:teuthology.orchestra.run.vm02.stderr:sudo: ntpd: command not found
2026-03-10T09:54:16.717 INFO:teuthology.orchestra.run.vm01.stdout:506 Cannot talk to daemon
2026-03-10T09:54:16.721 INFO:teuthology.orchestra.run.vm08.stderr:sudo: ntpd: command not found
2026-03-10T09:54:16.730 INFO:teuthology.orchestra.run.vm02.stdout:506 Cannot talk to daemon
2026-03-10T09:54:16.732 INFO:teuthology.orchestra.run.vm08.stdout:506 Cannot talk to daemon
2026-03-10T09:54:16.736 INFO:teuthology.orchestra.run.vm01.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T09:54:16.746 INFO:teuthology.orchestra.run.vm08.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T09:54:16.749 INFO:teuthology.orchestra.run.vm02.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T09:54:16.752 INFO:teuthology.orchestra.run.vm01.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T09:54:16.759 INFO:teuthology.orchestra.run.vm08.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T09:54:16.773 INFO:teuthology.orchestra.run.vm02.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T09:54:16.812 INFO:teuthology.orchestra.run.vm01.stderr:bash: line 1: ntpq: command not found
2026-03-10T09:54:16.812 INFO:teuthology.orchestra.run.vm08.stderr:bash: line 1: ntpq: command not found
2026-03-10T09:54:16.816 INFO:teuthology.orchestra.run.vm01.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T09:54:16.816 INFO:teuthology.orchestra.run.vm01.stdout:===============================================================================
2026-03-10T09:54:16.816 INFO:teuthology.orchestra.run.vm01.stdout:^? 141.84.43.73 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:54:16.816 INFO:teuthology.orchestra.run.vm01.stdout:^? static.119.109.140.128.c> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:54:16.816 INFO:teuthology.orchestra.run.vm01.stdout:^? time2.sebhosting.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:54:16.816 INFO:teuthology.orchestra.run.vm01.stdout:^? mail.light-speed.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:54:16.817 INFO:teuthology.orchestra.run.vm08.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T09:54:16.817 INFO:teuthology.orchestra.run.vm08.stdout:===============================================================================
2026-03-10T09:54:16.817 INFO:teuthology.orchestra.run.vm08.stdout:^? 141.84.43.73 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:54:16.817 INFO:teuthology.orchestra.run.vm08.stdout:^? static.119.109.140.128.c> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:54:16.817 INFO:teuthology.orchestra.run.vm08.stdout:^? time2.sebhosting.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:54:16.817 INFO:teuthology.orchestra.run.vm08.stdout:^? mail.light-speed.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:54:16.826 INFO:teuthology.orchestra.run.vm02.stderr:bash: line 1: ntpq: command not found
2026-03-10T09:54:16.830 INFO:teuthology.orchestra.run.vm02.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T09:54:16.830 INFO:teuthology.orchestra.run.vm02.stdout:===============================================================================
2026-03-10T09:54:16.830 INFO:teuthology.orchestra.run.vm02.stdout:^? static.119.109.140.128.c> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:54:16.830 INFO:teuthology.orchestra.run.vm02.stdout:^? time2.sebhosting.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:54:16.830 INFO:teuthology.orchestra.run.vm02.stdout:^? mail.light-speed.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:54:16.830 INFO:teuthology.orchestra.run.vm02.stdout:^? 141.84.43.73 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:54:16.830 INFO:teuthology.run_tasks:Running task pexec...
2026-03-10T09:54:16.833 INFO:teuthology.task.pexec:Executing custom commands...
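The clock-sync command logged above is a fallback chain: each `||` tries the next candidate daemon (ntp, ntpd, chronyd), so only the one actually present on the host gets stopped, stepped, and restarted, and the trailing `|| true` keeps a missing `ntpq`/`chronyc` from failing the whole task — which is why all the "not loaded" / "command not found" noise on these chrony-only hosts is harmless. A minimal sketch of the same pattern with stand-in commands (the echoes are illustrative, not teuthology code):

```shell
# Fallback chain as used by the clock task: try each candidate in turn;
# 'false' stands in for the tools that are absent on this host.
step_clock() {
    false ||                                   # sudo ntpd -gq (ntpd missing)
    echo "stepped clock via chronyc makestep"  # chrony path succeeds
}
report_sources() {
    false ||                                   # ntpq -p (command not found)
    echo "chronyc sources output" ||
    true                                       # never fail the task outright
}
step_clock
report_sources
```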
2026-03-10T09:54:16.833 DEBUG:teuthology.orchestra.run.vm01:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T09:54:16.833 DEBUG:teuthology.orchestra.run.vm02:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T09:54:16.833 DEBUG:teuthology.orchestra.run.vm08:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T09:54:16.836 DEBUG:teuthology.task.pexec:ubuntu@vm01.local< sudo dnf remove nvme-cli -y
2026-03-10T09:54:16.836 DEBUG:teuthology.task.pexec:ubuntu@vm01.local< sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T09:54:16.836 DEBUG:teuthology.task.pexec:ubuntu@vm01.local< sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T09:54:16.836 DEBUG:teuthology.task.pexec:ubuntu@vm01.local< sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T09:54:16.836 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm01.local
2026-03-10T09:54:16.836 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T09:54:16.836 INFO:teuthology.task.pexec:sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T09:54:16.836 INFO:teuthology.task.pexec:sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T09:54:16.836 INFO:teuthology.task.pexec:sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T09:54:16.837 DEBUG:teuthology.task.pexec:ubuntu@vm02.local< sudo dnf remove nvme-cli -y
2026-03-10T09:54:16.837 DEBUG:teuthology.task.pexec:ubuntu@vm02.local< sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T09:54:16.837 DEBUG:teuthology.task.pexec:ubuntu@vm02.local< sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T09:54:16.837 DEBUG:teuthology.task.pexec:ubuntu@vm02.local< sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T09:54:16.837 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm02.local
2026-03-10T09:54:16.837 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T09:54:16.837 INFO:teuthology.task.pexec:sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T09:54:16.837 INFO:teuthology.task.pexec:sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T09:54:16.837 INFO:teuthology.task.pexec:sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T09:54:16.860 DEBUG:teuthology.task.pexec:ubuntu@vm08.local< sudo dnf remove nvme-cli -y
2026-03-10T09:54:16.860 DEBUG:teuthology.task.pexec:ubuntu@vm08.local< sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T09:54:16.860 DEBUG:teuthology.task.pexec:ubuntu@vm08.local< sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T09:54:16.860 DEBUG:teuthology.task.pexec:ubuntu@vm08.local< sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T09:54:16.860 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm08.local
2026-03-10T09:54:16.860 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T09:54:16.860 INFO:teuthology.task.pexec:sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T09:54:16.860 INFO:teuthology.task.pexec:sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T09:54:16.860 INFO:teuthology.task.pexec:sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T09:54:17.070 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: nvme-cli
2026-03-10T09:54:17.070 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T09:54:17.074 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:54:17.075 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
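The two `sed` passes run by the pexec task switch podman's default OCI runtime from crun to runc: the first rewrites a commented-out `#runtime = "crun"` default into an active `runtime = "runc"`, and the second comments out any crun setting that was already active. A sketch of the same edit on a scratch file (GNU sed, as on these CentOS hosts; the real target is /usr/share/containers/containers.conf):

```shell
# Scratch copy standing in for /usr/share/containers/containers.conf,
# with both a commented and an active crun setting.
conf=$(mktemp)
printf '#runtime = "crun"\nruntime = "crun"\n' > "$conf"

# Pass 1: uncomment the crun default, switching it to runc at the same time.
sed -i 's/^#runtime = "crun"/runtime = "runc"/g' "$conf"
# Pass 2: comment out any crun setting that is still active.
sed -i 's/runtime = "crun"/#runtime = "crun"/g' "$conf"

cat "$conf"
# -> runtime = "runc"
#    #runtime = "crun"
```

The order matters: running pass 2 first would comment out every crun line, leaving pass 1 nothing but commented lines to rewrite, which still works here only because pass 1 matches the commented form.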
2026-03-10T09:54:17.075 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:54:17.077 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: nvme-cli
2026-03-10T09:54:17.078 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T09:54:17.080 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: nvme-cli
2026-03-10T09:54:17.080 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T09:54:17.081 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:54:17.081 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T09:54:17.081 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:54:17.083 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:54:17.084 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T09:54:17.084 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:54:17.557 INFO:teuthology.orchestra.run.vm08.stdout:Last metadata expiration check: 0:01:22 ago on Tue 10 Mar 2026 09:52:55 AM UTC.
2026-03-10T09:54:17.627 INFO:teuthology.orchestra.run.vm02.stdout:Last metadata expiration check: 0:01:12 ago on Tue 10 Mar 2026 09:53:05 AM UTC.
2026-03-10T09:54:17.640 INFO:teuthology.orchestra.run.vm01.stdout:Last metadata expiration check: 0:01:23 ago on Tue 10 Mar 2026 09:52:54 AM UTC.
2026-03-10T09:54:17.697 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:54:17.697 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:54:17.697 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T09:54:17.697 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:54:17.697 INFO:teuthology.orchestra.run.vm08.stdout:Installing:
2026-03-10T09:54:17.697 INFO:teuthology.orchestra.run.vm08.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-10T09:54:17.698 INFO:teuthology.orchestra.run.vm08.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-10T09:54:17.698 INFO:teuthology.orchestra.run.vm08.stdout: runc x86_64 4:1.4.0-2.el9 appstream 4.0 M
2026-03-10T09:54:17.698 INFO:teuthology.orchestra.run.vm08.stdout:Installing dependencies:
2026-03-10T09:54:17.698 INFO:teuthology.orchestra.run.vm08.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-10T09:54:17.698 INFO:teuthology.orchestra.run.vm08.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-10T09:54:17.698 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T09:54:17.698 INFO:teuthology.orchestra.run.vm08.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-10T09:54:17.698 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:54:17.698 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T09:54:17.698 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:54:17.698 INFO:teuthology.orchestra.run.vm08.stdout:Install 7 Packages
2026-03-10T09:54:17.698 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:54:17.698 INFO:teuthology.orchestra.run.vm08.stdout:Total download size: 6.3 M
2026-03-10T09:54:17.698 INFO:teuthology.orchestra.run.vm08.stdout:Installed size: 24 M
2026-03-10T09:54:17.698 INFO:teuthology.orchestra.run.vm08.stdout:Downloading Packages:
2026-03-10T09:54:17.742 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout:Installing:
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout: runc x86_64 4:1.4.0-2.el9 appstream 4.0 M
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout:Installing dependencies:
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout:Install 7 Packages
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout:Total download size: 6.3 M
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout:Installed size: 24 M
2026-03-10T09:54:17.743 INFO:teuthology.orchestra.run.vm01.stdout:Downloading Packages:
2026-03-10T09:54:17.763 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout:Installing:
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout: runc x86_64 4:1.4.0-2.el9 appstream 4.0 M
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout:Installing dependencies:
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout:Install 7 Packages
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout:Total download size: 6.3 M
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout:Installed size: 24 M
2026-03-10T09:54:17.764 INFO:teuthology.orchestra.run.vm02.stdout:Downloading Packages:
2026-03-10T09:54:18.435 INFO:teuthology.orchestra.run.vm08.stdout:(1/7): nvmetcli-0.8-3.el9.noarch.rpm 504 kB/s | 44 kB 00:00
2026-03-10T09:54:18.437 INFO:teuthology.orchestra.run.vm01.stdout:(1/7): nvmetcli-0.8-3.el9.noarch.rpm 341 kB/s | 44 kB 00:00
2026-03-10T09:54:18.437 INFO:teuthology.orchestra.run.vm01.stdout:(2/7): python3-configshell-1.1.30-1.el9.noarch. 554 kB/s | 72 kB 00:00
2026-03-10T09:54:18.445 INFO:teuthology.orchestra.run.vm08.stdout:(2/7): python3-configshell-1.1.30-1.el9.noarch. 741 kB/s | 72 kB 00:00
2026-03-10T09:54:18.449 INFO:teuthology.orchestra.run.vm02.stdout:(1/7): nvmetcli-0.8-3.el9.noarch.rpm 349 kB/s | 44 kB 00:00
2026-03-10T09:54:18.479 INFO:teuthology.orchestra.run.vm02.stdout:(2/7): python3-configshell-1.1.30-1.el9.noarch. 461 kB/s | 72 kB 00:00
2026-03-10T09:54:18.485 INFO:teuthology.orchestra.run.vm08.stdout:(3/7): python3-kmod-0.9-32.el9.x86_64.rpm 1.7 MB/s | 84 kB 00:00
2026-03-10T09:54:18.494 INFO:teuthology.orchestra.run.vm01.stdout:(3/7): python3-kmod-0.9-32.el9.x86_64.rpm 1.5 MB/s | 84 kB 00:00
2026-03-10T09:54:18.495 INFO:teuthology.orchestra.run.vm01.stdout:(4/7): python3-pyparsing-2.4.7-9.el9.noarch.rpm 2.6 MB/s | 150 kB 00:00
2026-03-10T09:54:18.504 INFO:teuthology.orchestra.run.vm08.stdout:(4/7): python3-pyparsing-2.4.7-9.el9.noarch.rpm 2.5 MB/s | 150 kB 00:00
2026-03-10T09:54:18.537 INFO:teuthology.orchestra.run.vm08.stdout:(5/7): nvme-cli-2.16-1.el9.x86_64.rpm 6.1 MB/s | 1.2 MB 00:00
2026-03-10T09:54:18.539 INFO:teuthology.orchestra.run.vm01.stdout:(5/7): nvme-cli-2.16-1.el9.x86_64.rpm 5.0 MB/s | 1.2 MB 00:00
2026-03-10T09:54:18.543 INFO:teuthology.orchestra.run.vm02.stdout:(3/7): python3-kmod-0.9-32.el9.x86_64.rpm 898 kB/s | 84 kB 00:00
2026-03-10T09:54:18.581 INFO:teuthology.orchestra.run.vm08.stdout:(6/7): python3-urwid-2.1.2-4.el9.x86_64.rpm 8.6 MB/s | 837 kB 00:00
2026-03-10T09:54:18.584 INFO:teuthology.orchestra.run.vm01.stdout:(6/7): python3-urwid-2.1.2-4.el9.x86_64.rpm 9.1 MB/s | 837 kB 00:00
2026-03-10T09:54:18.624 INFO:teuthology.orchestra.run.vm02.stdout:(4/7): python3-pyparsing-2.4.7-9.el9.noarch.rpm 1.0 MB/s | 150 kB 00:00
2026-03-10T09:54:18.702 INFO:teuthology.orchestra.run.vm01.stdout:(7/7): runc-1.4.0-2.el9.x86_64.rpm 19 MB/s | 4.0 MB 00:00
2026-03-10T09:54:18.703 INFO:teuthology.orchestra.run.vm01.stdout:--------------------------------------------------------------------------------
2026-03-10T09:54:18.703 INFO:teuthology.orchestra.run.vm01.stdout:Total 6.5 MB/s | 6.3 MB 00:00
2026-03-10T09:54:18.787 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T09:54:18.797 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T09:54:18.797 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T09:54:18.867 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T09:54:18.868 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T09:54:19.034 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T09:54:19.046 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/7
2026-03-10T09:54:19.058 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/7
2026-03-10T09:54:19.067 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/7
2026-03-10T09:54:19.075 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/7
2026-03-10T09:54:19.078 INFO:teuthology.orchestra.run.vm01.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/7
2026-03-10T09:54:19.137 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/7
2026-03-10T09:54:19.173 INFO:teuthology.orchestra.run.vm02.stdout:(5/7): python3-urwid-2.1.2-4.el9.x86_64.rpm 1.3 MB/s | 837 kB 00:00
2026-03-10T09:54:19.231 INFO:teuthology.orchestra.run.vm02.stdout:(6/7): runc-1.4.0-2.el9.x86_64.rpm 6.5 MB/s | 4.0 MB 00:00
2026-03-10T09:54:19.262 INFO:teuthology.orchestra.run.vm08.stdout:(7/7): runc-1.4.0-2.el9.x86_64.rpm 5.2 MB/s | 4.0 MB 00:00
2026-03-10T09:54:19.263 INFO:teuthology.orchestra.run.vm08.stdout:--------------------------------------------------------------------------------
2026-03-10T09:54:19.263 INFO:teuthology.orchestra.run.vm08.stdout:Total 4.0 MB/s | 6.3 MB 00:01
2026-03-10T09:54:19.299 INFO:teuthology.orchestra.run.vm01.stdout: Installing : runc-4:1.4.0-2.el9.x86_64 6/7
2026-03-10T09:54:19.306 INFO:teuthology.orchestra.run.vm01.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 7/7
2026-03-10T09:54:19.312 INFO:teuthology.orchestra.run.vm02.stdout:(7/7): nvme-cli-2.16-1.el9.x86_64.rpm 1.2 MB/s | 1.2 MB 00:00
2026-03-10T09:54:19.312 INFO:teuthology.orchestra.run.vm02.stdout:--------------------------------------------------------------------------------
2026-03-10T09:54:19.312 INFO:teuthology.orchestra.run.vm02.stdout:Total 4.1 MB/s | 6.3 MB 00:01
2026-03-10T09:54:19.349 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T09:54:19.360 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T09:54:19.360 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T09:54:19.404 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check
2026-03-10T09:54:19.416 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded.
2026-03-10T09:54:19.416 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T09:54:19.440 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T09:54:19.440 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T09:54:19.503 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T09:54:19.503 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction
2026-03-10T09:54:19.638 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T09:54:19.651 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/7
2026-03-10T09:54:19.666 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/7
2026-03-10T09:54:19.677 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/7
2026-03-10T09:54:19.685 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/7
2026-03-10T09:54:19.690 INFO:teuthology.orchestra.run.vm08.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/7
2026-03-10T09:54:19.713 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1
2026-03-10T09:54:19.716 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 7/7
2026-03-10T09:54:19.716 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T09:54:19.716 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:54:19.728 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/7
2026-03-10T09:54:19.742 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/7
2026-03-10T09:54:19.751 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/7
2026-03-10T09:54:19.761 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/7
2026-03-10T09:54:19.763 INFO:teuthology.orchestra.run.vm02.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/7
2026-03-10T09:54:19.764 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/7
2026-03-10T09:54:19.832 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/7
2026-03-10T09:54:19.943 INFO:teuthology.orchestra.run.vm08.stdout: Installing : runc-4:1.4.0-2.el9.x86_64 6/7
2026-03-10T09:54:19.950 INFO:teuthology.orchestra.run.vm08.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 7/7
2026-03-10T09:54:20.038 INFO:teuthology.orchestra.run.vm02.stdout: Installing : runc-4:1.4.0-2.el9.x86_64 6/7
2026-03-10T09:54:20.045 INFO:teuthology.orchestra.run.vm02.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 7/7
2026-03-10T09:54:20.332 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/7
2026-03-10T09:54:20.332 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/7
2026-03-10T09:54:20.332 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/7
2026-03-10T09:54:20.332 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/7
2026-03-10T09:54:20.332 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/7
2026-03-10T09:54:20.332 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/7
2026-03-10T09:54:20.383 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 7/7
2026-03-10T09:54:20.383 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T09:54:20.383 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:54:20.427 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : runc-4:1.4.0-2.el9.x86_64 7/7
2026-03-10T09:54:20.427 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:54:20.427 INFO:teuthology.orchestra.run.vm01.stdout:Installed:
2026-03-10T09:54:20.427 INFO:teuthology.orchestra.run.vm01.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-10T09:54:20.427 INFO:teuthology.orchestra.run.vm01.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-10T09:54:20.427 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-10T09:54:20.427 INFO:teuthology.orchestra.run.vm01.stdout: runc-4:1.4.0-2.el9.x86_64
2026-03-10T09:54:20.427 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:54:20.427 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:54:20.494 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 7/7
2026-03-10T09:54:20.494 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T09:54:20.494 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:54:20.562 DEBUG:teuthology.parallel:result is None
2026-03-10T09:54:21.040 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/7
2026-03-10T09:54:21.040 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/7
2026-03-10T09:54:21.040 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/7
2026-03-10T09:54:21.040 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/7
2026-03-10T09:54:21.040 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/7
2026-03-10T09:54:21.040 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/7
2026-03-10T09:54:21.168 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : runc-4:1.4.0-2.el9.x86_64 7/7
2026-03-10T09:54:21.168 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:54:21.168 INFO:teuthology.orchestra.run.vm08.stdout:Installed:
2026-03-10T09:54:21.168 INFO:teuthology.orchestra.run.vm08.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-10T09:54:21.168 INFO:teuthology.orchestra.run.vm08.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-10T09:54:21.168 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-10T09:54:21.168 INFO:teuthology.orchestra.run.vm08.stdout: runc-4:1.4.0-2.el9.x86_64
2026-03-10T09:54:21.168 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:54:21.168 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:54:21.208 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/7
2026-03-10T09:54:21.208 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/7
2026-03-10T09:54:21.208 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/7
2026-03-10T09:54:21.208 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/7
2026-03-10T09:54:21.209 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/7
2026-03-10T09:54:21.209 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/7
2026-03-10T09:54:21.331 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : runc-4:1.4.0-2.el9.x86_64 7/7
2026-03-10T09:54:21.331 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:54:21.331 INFO:teuthology.orchestra.run.vm02.stdout:Installed:
2026-03-10T09:54:21.331 INFO:teuthology.orchestra.run.vm02.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-10T09:54:21.331 INFO:teuthology.orchestra.run.vm02.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-10T09:54:21.331 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-10T09:54:21.331 INFO:teuthology.orchestra.run.vm02.stdout: runc-4:1.4.0-2.el9.x86_64
2026-03-10T09:54:21.331 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:54:21.331 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:54:21.340 DEBUG:teuthology.parallel:result is None
2026-03-10T09:54:21.431 DEBUG:teuthology.parallel:result is None
2026-03-10T09:54:21.431 INFO:teuthology.run_tasks:Running task install...
2026-03-10T09:54:21.433 DEBUG:teuthology.task.install:project ceph
2026-03-10T09:54:21.433 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T09:54:21.433 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T09:54:21.433 INFO:teuthology.task.install:Using flavor: default
2026-03-10T09:54:21.435 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-10T09:54:21.435 INFO:teuthology.task.install:extra packages: []
2026-03-10T09:54:21.435 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-10T09:54:21.435 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:54:21.436 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-10T09:54:21.436 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:54:21.436 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-10T09:54:21.436 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:54:22.038 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-10T09:54:22.038 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-10T09:54:22.071 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-10T09:54:22.072 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-10T09:54:22.101 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-10T09:54:22.101 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-10T09:54:22.767 INFO:teuthology.packaging:Writing yum repo: [ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-10T09:54:22.767 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T09:54:22.767 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-10T09:54:22.791 INFO:teuthology.packaging:Writing yum repo: [ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-10T09:54:22.791 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T09:54:22.791 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-10T09:54:22.804 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-10T09:54:22.804 DEBUG:teuthology.orchestra.run.vm01:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-10T09:54:22.813 INFO:teuthology.packaging:Writing yum repo: [ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source
packages baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS enabled=1 gpgcheck=0 type=rpm-md 2026-03-10T09:54:22.814 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T09:54:22.814 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/yum.repos.d/ceph.repo 2026-03-10T09:54:22.826 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64 2026-03-10T09:54:22.826 DEBUG:teuthology.orchestra.run.vm02:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi 2026-03-10T09:54:22.844 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64 2026-03-10T09:54:22.844 DEBUG:teuthology.orchestra.run.vm08:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi 2026-03-10T09:54:22.897 DEBUG:teuthology.orchestra.run.vm01:> sudo 
touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig 2026-03-10T09:54:22.908 DEBUG:teuthology.orchestra.run.vm02:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig 2026-03-10T09:54:22.923 DEBUG:teuthology.orchestra.run.vm08:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig 2026-03-10T09:54:22.956 DEBUG:teuthology.orchestra.run.vm01:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf 2026-03-10T09:54:22.991 DEBUG:teuthology.orchestra.run.vm02:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf 2026-03-10T09:54:22.994 INFO:teuthology.orchestra.run.vm01.stdout:check_obsoletes = 1 2026-03-10T09:54:22.996 DEBUG:teuthology.orchestra.run.vm01:> sudo yum clean all 2026-03-10T09:54:23.018 DEBUG:teuthology.orchestra.run.vm08:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf 2026-03-10T09:54:23.050 INFO:teuthology.orchestra.run.vm08.stdout:check_obsoletes = 1 2026-03-10T09:54:23.051 DEBUG:teuthology.orchestra.run.vm08:> sudo yum clean all 2026-03-10T09:54:23.067 
INFO:teuthology.orchestra.run.vm02.stdout:check_obsoletes = 1 2026-03-10T09:54:23.068 DEBUG:teuthology.orchestra.run.vm02:> sudo yum clean all 2026-03-10T09:54:23.187 INFO:teuthology.orchestra.run.vm01.stdout:41 files removed 2026-03-10T09:54:23.221 DEBUG:teuthology.orchestra.run.vm01:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath 2026-03-10T09:54:23.247 INFO:teuthology.orchestra.run.vm08.stdout:41 files removed 2026-03-10T09:54:23.269 INFO:teuthology.orchestra.run.vm02.stdout:41 files removed 2026-03-10T09:54:23.274 DEBUG:teuthology.orchestra.run.vm08:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath 2026-03-10T09:54:23.294 DEBUG:teuthology.orchestra.run.vm02:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath 2026-03-10T09:54:25.119 INFO:teuthology.orchestra.run.vm08.stdout:ceph packages for x86_64 50 kB/s | 84 kB 00:01 2026-03-10T09:54:25.158 INFO:teuthology.orchestra.run.vm02.stdout:ceph packages for x86_64 49 
kB/s | 84 kB 00:01 2026-03-10T09:54:25.159 INFO:teuthology.orchestra.run.vm01.stdout:ceph packages for x86_64 48 kB/s | 84 kB 00:01 2026-03-10T09:54:26.506 INFO:teuthology.orchestra.run.vm08.stdout:ceph noarch packages 8.5 kB/s | 12 kB 00:01 2026-03-10T09:54:26.534 INFO:teuthology.orchestra.run.vm01.stdout:ceph noarch packages 8.6 kB/s | 12 kB 00:01 2026-03-10T09:54:26.539 INFO:teuthology.orchestra.run.vm02.stdout:ceph noarch packages 8.6 kB/s | 12 kB 00:01 2026-03-10T09:54:27.872 INFO:teuthology.orchestra.run.vm02.stdout:ceph source packages 1.4 kB/s | 1.9 kB 00:01 2026-03-10T09:54:27.893 INFO:teuthology.orchestra.run.vm08.stdout:ceph source packages 1.4 kB/s | 1.9 kB 00:01 2026-03-10T09:54:27.897 INFO:teuthology.orchestra.run.vm01.stdout:ceph source packages 1.4 kB/s | 1.9 kB 00:01 2026-03-10T09:54:28.351 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - BaseOS 21 MB/s | 8.9 MB 00:00 2026-03-10T09:54:28.705 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - BaseOS 11 MB/s | 8.9 MB 00:00 2026-03-10T09:54:29.661 INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - BaseOS 5.1 MB/s | 8.9 MB 00:01 2026-03-10T09:54:30.085 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - AppStream 25 MB/s | 27 MB 00:01 2026-03-10T09:54:31.349 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - AppStream 14 MB/s | 27 MB 00:02 2026-03-10T09:54:31.965 INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - AppStream 17 MB/s | 27 MB 00:01 2026-03-10T09:54:33.975 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - CRB 8.0 MB/s | 8.0 MB 00:00 2026-03-10T09:54:35.302 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - Extras packages 44 kB/s | 20 kB 00:00 2026-03-10T09:54:35.303 INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - CRB 17 MB/s | 8.0 MB 00:00 2026-03-10T09:54:35.418 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - CRB 6.8 MB/s | 8.0 MB 00:01 2026-03-10T09:54:36.222 
INFO:teuthology.orchestra.run.vm01.stdout:Extra Packages for Enterprise Linux 24 MB/s | 20 MB 00:00 2026-03-10T09:54:37.061 INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - Extras packages 24 kB/s | 20 kB 00:00 2026-03-10T09:54:37.183 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - Extras packages 25 kB/s | 20 kB 00:00 2026-03-10T09:54:37.644 INFO:teuthology.orchestra.run.vm08.stdout:Extra Packages for Enterprise Linux 41 MB/s | 20 MB 00:00 2026-03-10T09:54:37.726 INFO:teuthology.orchestra.run.vm02.stdout:Extra Packages for Enterprise Linux 44 MB/s | 20 MB 00:00 2026-03-10T09:54:41.150 INFO:teuthology.orchestra.run.vm01.stdout:lab-extras 63 kB/s | 50 kB 00:00 2026-03-10T09:54:42.600 INFO:teuthology.orchestra.run.vm08.stdout:lab-extras 63 kB/s | 50 kB 00:00 2026-03-10T09:54:42.657 INFO:teuthology.orchestra.run.vm01.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T09:54:42.657 INFO:teuthology.orchestra.run.vm01.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T09:54:42.662 INFO:teuthology.orchestra.run.vm01.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 2026-03-10T09:54:42.662 INFO:teuthology.orchestra.run.vm01.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-10T09:54:42.675 INFO:teuthology.orchestra.run.vm02.stdout:lab-extras 63 kB/s | 50 kB 00:00 2026-03-10T09:54:42.697 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 
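The `sed` one-liner run on each host above pins `/etc/yum.repos.d/ceph.repo` to this build: it inserts `priority=1` after every `enabled=1` line and rewrites any `ref/<branch>/` path segment to `sha1/e911bde…/`. A hypothetical Python equivalent of those two sed expressions (the sample repo text below is illustrative, not taken from this run):

```python
import re

SHA1 = "e911bdebe5c8faa3800735d1568fcdca65db60df"

def pin_repo(text: str, sha1: str = SHA1) -> str:
    """Mirror the two sed edits from the log above.

    1. ':a;N;$!ba;s/enabled=1\\ngpg/enabled=1\\npriority=1\\ngpg/g'
       -> insert a priority=1 line between enabled=1 and gpgcheck.
    2. 's;ref/[a-zA-Z0-9_-]*/;sha1/<sha1>/;g'
       -> pin branch-based URLs to an exact build sha1.
    """
    text = text.replace("enabled=1\ngpg", "enabled=1\npriority=1\ngpg")
    text = re.sub(r"ref/[a-zA-Z0-9_-]*/", f"sha1/{sha1}/", text)
    return text

sample = ("baseurl=https://3.chacra.ceph.com/r/ceph/ref/squid/centos/9/\n"
          "enabled=1\ngpgcheck=0\n")
print(pin_repo(sample))
```

The priority line matters because the hosts also enable yum-plugin-priorities (the `priorities.conf` edits above), so the pinned chacra repo wins over distro repos that ship older `librados2`/`librbd1` builds.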
2026-03-10T09:54:42.702 INFO:teuthology.orchestra.run.vm01.stdout:======================================================================================
2026-03-10T09:54:42.702 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T09:54:42.702 INFO:teuthology.orchestra.run.vm01.stdout:======================================================================================
2026-03-10T09:54:42.702 INFO:teuthology.orchestra.run.vm01.stdout:Installing:
2026-03-10T09:54:42.702 INFO:teuthology.orchestra.run.vm01.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout:Upgrading:
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout:Installing dependencies:
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-10T09:54:42.703 INFO:teuthology.orchestra.run.vm01.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-10T09:54:42.704 INFO:teuthology.orchestra.run.vm01.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-10T09:54:42.705 INFO:teuthology.orchestra.run.vm01.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: unzip x86_64 6.0-59.el9 baseos 182 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: zip x86_64 3.0-35.el9 baseos 266 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout:Installing weak dependencies:
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout:======================================================================================
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout:Install 134 Packages
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout:Upgrade 2 Packages
2026-03-10T09:54:42.706 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:54:42.707 INFO:teuthology.orchestra.run.vm01.stdout:Total download size: 210 M
2026-03-10T09:54:42.707 INFO:teuthology.orchestra.run.vm01.stdout:Downloading Packages:
2026-03-10T09:54:43.992 INFO:teuthology.orchestra.run.vm08.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T09:54:43.992 INFO:teuthology.orchestra.run.vm08.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T09:54:43.997 INFO:teuthology.orchestra.run.vm08.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed.
2026-03-10T09:54:43.997 INFO:teuthology.orchestra.run.vm08.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-10T09:54:44.028 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout:======================================================================================
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout:======================================================================================
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout:Installing:
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M
2026-03-10T09:54:44.033 INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout:Upgrading:
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout:Installing dependencies:
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout:
flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k 2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-10T09:54:44.034 INFO:teuthology.orchestra.run.vm08.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 
2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt x86_64 
3.2.2-1.el9 epel 43 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-10T09:54:44.035 INFO:teuthology.orchestra.run.vm08.stdout: 
python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: 
python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile 
noarch 2.0-10.el9 epel 20 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout:Installing weak dependencies: 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-10T09:54:44.036 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:54:44.037 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary 2026-03-10T09:54:44.037 INFO:teuthology.orchestra.run.vm08.stdout:====================================================================================== 2026-03-10T09:54:44.037 INFO:teuthology.orchestra.run.vm08.stdout:Install 134 Packages 2026-03-10T09:54:44.037 INFO:teuthology.orchestra.run.vm08.stdout:Upgrade 2 Packages 2026-03-10T09:54:44.037 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:54:44.037 INFO:teuthology.orchestra.run.vm08.stdout:Total download size: 210 M 2026-03-10T09:54:44.037 INFO:teuthology.orchestra.run.vm08.stdout:Downloading Packages: 2026-03-10T09:54:44.148 INFO:teuthology.orchestra.run.vm02.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 
2026-03-10T09:54:44.149 INFO:teuthology.orchestra.run.vm02.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T09:54:44.153 INFO:teuthology.orchestra.run.vm02.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 2026-03-10T09:54:44.154 INFO:teuthology.orchestra.run.vm02.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-10T09:54:44.188 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout:====================================================================================== 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout:====================================================================================== 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout:Installing: 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 
7.4 M 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k 2026-03-10T09:54:44.193 INFO:teuthology.orchestra.run.vm02.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M 
2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout:Upgrading: 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout:Installing dependencies: 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas x86_64 
3.0.4-9.el9 appstream 30 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-10T09:54:44.194 INFO:teuthology.orchestra.run.vm02.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-10T09:54:44.195 
INFO:teuthology.orchestra.run.vm02.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 
2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-collections noarch 
3.0.0-8.el9 epel 23 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-10T09:54:44.195 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend noarch 
3.1.0-2.el9 epel 16 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: unzip x86_64 6.0-59.el9 baseos 182 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: zip x86_64 3.0-35.el9 baseos 266 k
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout:Installing weak dependencies:
2026-03-10T09:54:44.196 INFO:teuthology.orchestra.run.vm02.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k
2026-03-10T09:54:44.197 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:54:44.197 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T09:54:44.197 INFO:teuthology.orchestra.run.vm02.stdout:======================================================================================
2026-03-10T09:54:44.197 INFO:teuthology.orchestra.run.vm02.stdout:Install 134 Packages
2026-03-10T09:54:44.197 INFO:teuthology.orchestra.run.vm02.stdout:Upgrade 2 Packages
2026-03-10T09:54:44.197 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:54:44.197 INFO:teuthology.orchestra.run.vm02.stdout:Total download size: 210 M
2026-03-10T09:54:44.197 INFO:teuthology.orchestra.run.vm02.stdout:Downloading Packages:
2026-03-10T09:54:44.837 INFO:teuthology.orchestra.run.vm01.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 14 kB/s | 6.5 kB 00:00
2026-03-10T09:54:45.126 INFO:teuthology.orchestra.run.vm02.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 13 kB/s | 6.5 kB 00:00
2026-03-10T09:54:45.665 INFO:teuthology.orchestra.run.vm01.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.4 MB/s | 1.2 MB 00:00
2026-03-10T09:54:45.786 INFO:teuthology.orchestra.run.vm01.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 1.2 MB/s | 145 kB 00:00
2026-03-10T09:54:45.963 INFO:teuthology.orchestra.run.vm02.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.4 MB/s | 1.2 MB 00:00
2026-03-10T09:54:46.081 INFO:teuthology.orchestra.run.vm08.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 14 kB/s | 6.5 kB 00:00
2026-03-10T09:54:46.084 INFO:teuthology.orchestra.run.vm02.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 1.2 MB/s | 145 kB 00:00
2026-03-10T09:54:46.117 INFO:teuthology.orchestra.run.vm01.stdout:(4/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 3.1 MB/s | 5.5 MB 00:01
2026-03-10T09:54:46.267 INFO:teuthology.orchestra.run.vm01.stdout:(5/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 5.0 MB/s | 2.4 MB 00:00
2026-03-10T09:54:46.295 INFO:teuthology.orchestra.run.vm02.stdout:(4/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 3.3 MB/s | 5.5 MB 00:01
2026-03-10T09:54:46.355 INFO:teuthology.orchestra.run.vm01.stdout:(6/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 4.5 MB/s | 1.1 MB 00:00
2026-03-10T09:54:46.517 INFO:teuthology.orchestra.run.vm02.stdout:(5/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 4.8 MB/s | 1.1 MB 00:00
2026-03-10T09:54:46.568 INFO:teuthology.orchestra.run.vm02.stdout:(6/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 5.0 MB/s | 2.4 MB 00:00
2026-03-10T09:54:46.929 INFO:teuthology.orchestra.run.vm01.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 7.2 MB/s | 4.7 MB 00:00
2026-03-10T09:54:47.214 INFO:teuthology.orchestra.run.vm02.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 6.8 MB/s | 4.7 MB 00:00
2026-03-10T09:54:47.214 INFO:teuthology.orchestra.run.vm08.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.0 MB/s | 1.2 MB 00:01
2026-03-10T09:54:47.391 INFO:teuthology.orchestra.run.vm08.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 823 kB/s | 145 kB 00:00
2026-03-10T09:54:47.883 INFO:teuthology.orchestra.run.vm08.stdout:(4/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 2.4 MB/s | 5.5 MB 00:02
2026-03-10T09:54:48.036 INFO:teuthology.orchestra.run.vm01.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 5.9 MB/s | 22 MB 00:03
2026-03-10T09:54:48.126 INFO:teuthology.orchestra.run.vm08.stdout:(5/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 4.4 MB/s | 1.1 MB 00:00
2026-03-10T09:54:48.163 INFO:teuthology.orchestra.run.vm01.stdout:(9/136): ceph-selinux-19.2.3-678.ge911bdeb.el9. 199 kB/s | 25 kB 00:00
2026-03-10T09:54:48.345 INFO:teuthology.orchestra.run.vm02.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 5.9 MB/s | 22 MB 00:03
2026-03-10T09:54:48.434 INFO:teuthology.orchestra.run.vm08.stdout:(6/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 2.3 MB/s | 2.4 MB 00:01
2026-03-10T09:54:48.456 INFO:teuthology.orchestra.run.vm02.stdout:(9/136): ceph-selinux-19.2.3-678.ge911bdeb.el9. 226 kB/s | 25 kB 00:00
2026-03-10T09:54:48.559 INFO:teuthology.orchestra.run.vm01.stdout:(10/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 7.7 MB/s | 17 MB 00:02
2026-03-10T09:54:48.638 INFO:teuthology.orchestra.run.vm01.stdout:(11/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9 6.3 MB/s | 11 MB 00:01
2026-03-10T09:54:48.677 INFO:teuthology.orchestra.run.vm01.stdout:(12/136): libcephfs-devel-19.2.3-678.ge911bdeb. 286 kB/s | 34 kB 00:00
2026-03-10T09:54:48.795 INFO:teuthology.orchestra.run.vm01.stdout:(13/136): libcephsqlite-19.2.3-678.ge911bdeb.el 1.4 MB/s | 163 kB 00:00
2026-03-10T09:54:48.841 INFO:teuthology.orchestra.run.vm08.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 6.6 MB/s | 4.7 MB 00:00
2026-03-10T09:54:48.892 INFO:teuthology.orchestra.run.vm02.stdout:(10/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9 6.4 MB/s | 11 MB 00:01
2026-03-10T09:54:48.893 INFO:teuthology.orchestra.run.vm01.stdout:(14/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 3.8 MB/s | 1.0 MB 00:00
2026-03-10T09:54:48.913 INFO:teuthology.orchestra.run.vm01.stdout:(15/136): librados-devel-19.2.3-678.ge911bdeb.e 1.0 MB/s | 127 kB 00:00
2026-03-10T09:54:49.003 INFO:teuthology.orchestra.run.vm02.stdout:(11/136): libcephfs-devel-19.2.3-678.ge911bdeb. 304 kB/s | 34 kB 00:00
2026-03-10T09:54:49.019 INFO:teuthology.orchestra.run.vm01.stdout:(16/136): libradosstriper1-19.2.3-678.ge911bdeb 3.9 MB/s | 503 kB 00:00
2026-03-10T09:54:49.069 INFO:teuthology.orchestra.run.vm02.stdout:(12/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 6.8 MB/s | 17 MB 00:02
2026-03-10T09:54:49.139 INFO:teuthology.orchestra.run.vm01.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 379 kB/s | 45 kB 00:00
2026-03-10T09:54:49.195 INFO:teuthology.orchestra.run.vm02.stdout:(13/136): libcephsqlite-19.2.3-678.ge911bdeb.el 1.3 MB/s | 163 kB 00:00
2026-03-10T09:54:49.226 INFO:teuthology.orchestra.run.vm02.stdout:(14/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 4.4 MB/s | 1.0 MB 00:00
2026-03-10T09:54:49.260 INFO:teuthology.orchestra.run.vm01.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 1.1 MB/s | 142 kB 00:00
2026-03-10T09:54:49.319 INFO:teuthology.orchestra.run.vm02.stdout:(15/136): librados-devel-19.2.3-678.ge911bdeb.e 1.0 MB/s | 127 kB 00:00
2026-03-10T09:54:49.348 INFO:teuthology.orchestra.run.vm02.stdout:(16/136): libradosstriper1-19.2.3-678.ge911bdeb 4.1 MB/s | 503 kB 00:00
2026-03-10T09:54:49.381 INFO:teuthology.orchestra.run.vm01.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.3 MB/s | 165 kB 00:00
2026-03-10T09:54:49.475 INFO:teuthology.orchestra.run.vm02.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 356 kB/s | 45 kB 00:00
2026-03-10T09:54:49.506 INFO:teuthology.orchestra.run.vm01.stdout:(20/136): python3-rados-19.2.3-678.ge911bdeb.el 2.6 MB/s | 323 kB 00:00
2026-03-10T09:54:49.599 INFO:teuthology.orchestra.run.vm02.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 1.1 MB/s | 142 kB 00:00
2026-03-10T09:54:49.638 INFO:teuthology.orchestra.run.vm01.stdout:(21/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.2 MB/s | 303 kB 00:00
2026-03-10T09:54:49.726 INFO:teuthology.orchestra.run.vm02.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.3 MB/s | 165 kB 00:00
2026-03-10T09:54:49.749 INFO:teuthology.orchestra.run.vm01.stdout:(22/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 6.5 MB/s | 5.4 MB 00:00
2026-03-10T09:54:49.758 INFO:teuthology.orchestra.run.vm01.stdout:(23/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 838 kB/s | 100 kB 00:00
2026-03-10T09:54:49.840 INFO:teuthology.orchestra.run.vm02.stdout:(20/136): python3-rados-19.2.3-678.ge911bdeb.el 2.8 MB/s | 323 kB 00:00
2026-03-10T09:54:49.867 INFO:teuthology.orchestra.run.vm01.stdout:(24/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 724 kB/s | 85 kB 00:00
2026-03-10T09:54:49.936 INFO:teuthology.orchestra.run.vm08.stdout:(8/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9. 9.8 MB/s | 11 MB 00:01
2026-03-10T09:54:49.954 INFO:teuthology.orchestra.run.vm02.stdout:(21/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.6 MB/s | 303 kB 00:00
2026-03-10T09:54:49.985 INFO:teuthology.orchestra.run.vm01.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.4 MB/s | 171 kB 00:00
2026-03-10T09:54:50.056 INFO:teuthology.orchestra.run.vm08.stdout:(9/136): ceph-selinux-19.2.3-678.ge911bdeb.el9. 210 kB/s | 25 kB 00:00
2026-03-10T09:54:50.078 INFO:teuthology.orchestra.run.vm02.stdout:(22/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 7.1 MB/s | 5.4 MB 00:00
2026-03-10T09:54:50.079 INFO:teuthology.orchestra.run.vm02.stdout:(23/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 797 kB/s | 100 kB 00:00
2026-03-10T09:54:50.124 INFO:teuthology.orchestra.run.vm01.stdout:(26/136): ceph-grafana-dashboards-19.2.3-678.ge 224 kB/s | 31 kB 00:00
2026-03-10T09:54:50.125 INFO:teuthology.orchestra.run.vm08.stdout:(10/136): ceph-common-19.2.3-678.ge911bdeb.el9. 4.8 MB/s | 22 MB 00:04
2026-03-10T09:54:50.223 INFO:teuthology.orchestra.run.vm02.stdout:(24/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 586 kB/s | 85 kB 00:00
2026-03-10T09:54:50.247 INFO:teuthology.orchestra.run.vm01.stdout:(27/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 6.4 MB/s | 3.1 MB 00:00
2026-03-10T09:54:50.247 INFO:teuthology.orchestra.run.vm08.stdout:(11/136): libcephfs-devel-19.2.3-678.ge911bdeb. 274 kB/s | 34 kB 00:00
2026-03-10T09:54:50.248 INFO:teuthology.orchestra.run.vm01.stdout:(28/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00
2026-03-10T09:54:50.346 INFO:teuthology.orchestra.run.vm02.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.4 MB/s | 171 kB 00:00
2026-03-10T09:54:50.500 INFO:teuthology.orchestra.run.vm02.stdout:(26/136): ceph-grafana-dashboards-19.2.3-678.ge 202 kB/s | 31 kB 00:00
2026-03-10T09:54:50.505 INFO:teuthology.orchestra.run.vm08.stdout:(12/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 3.8 MB/s | 1.0 MB 00:00
2026-03-10T09:54:50.629 INFO:teuthology.orchestra.run.vm02.stdout:(27/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 5.7 MB/s | 3.1 MB 00:00
2026-03-10T09:54:50.637 INFO:teuthology.orchestra.run.vm02.stdout:(28/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.1 MB/s | 150 kB 00:00
2026-03-10T09:54:50.642 INFO:teuthology.orchestra.run.vm08.stdout:(13/136): libcephsqlite-19.2.3-678.ge911bdeb.el 1.2 MB/s | 163 kB 00:00
2026-03-10T09:54:50.811 INFO:teuthology.orchestra.run.vm08.stdout:(14/136): librados-devel-19.2.3-678.ge911bdeb.e 754 kB/s | 127 kB 00:00
2026-03-10T09:54:50.955 INFO:teuthology.orchestra.run.vm01.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 5.4 MB/s | 3.8 MB 00:00
2026-03-10T09:54:50.983 INFO:teuthology.orchestra.run.vm08.stdout:(15/136): libradosstriper1-19.2.3-678.ge911bdeb 2.9 MB/s | 503 kB 00:00
2026-03-10T09:54:51.120 INFO:teuthology.orchestra.run.vm01.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 1.5 MB/s | 253 kB 00:00
2026-03-10T09:54:51.285 INFO:teuthology.orchestra.run.vm01.stdout:(31/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 298 kB/s | 49 kB 00:00
2026-03-10T09:54:51.416 INFO:teuthology.orchestra.run.vm02.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 4.8 MB/s | 3.8 MB 00:00
2026-03-10T09:54:51.435 INFO:teuthology.orchestra.run.vm01.stdout:(32/136): ceph-prometheus-alerts-19.2.3-678.ge9 112 kB/s | 17 kB 00:00
2026-03-10T09:54:51.454 INFO:teuthology.orchestra.run.vm08.stdout:(16/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 5.7 MB/s | 17 MB 00:03
2026-03-10T09:54:51.472 INFO:teuthology.orchestra.run.vm01.stdout:(33/136): ceph-mgr-diskprediction-local-19.2.3- 6.0 MB/s | 7.4 MB 00:01
2026-03-10T09:54:51.527 INFO:teuthology.orchestra.run.vm02.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 2.2 MB/s | 253 kB 00:00
2026-03-10T09:54:51.558 INFO:teuthology.orchestra.run.vm01.stdout:(34/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.4 MB/s | 299 kB 00:00
2026-03-10T09:54:51.567 INFO:teuthology.orchestra.run.vm08.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 399 kB/s | 45 kB 00:00
2026-03-10T09:54:51.608 INFO:teuthology.orchestra.run.vm01.stdout:(35/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 5.5 MB/s | 769 kB 00:00
2026-03-10T09:54:51.637 INFO:teuthology.orchestra.run.vm02.stdout:(31/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 449 kB/s | 49 kB 00:00
2026-03-10T09:54:51.685 INFO:teuthology.orchestra.run.vm08.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 1.2 MB/s | 142 kB 00:00
2026-03-10T09:54:51.694 INFO:teuthology.orchestra.run.vm02.stdout:(32/136): ceph-mgr-diskprediction-local-19.2.3- 7.0 MB/s | 7.4 MB 00:01
2026-03-10T09:54:51.748 INFO:teuthology.orchestra.run.vm02.stdout:(33/136): ceph-prometheus-alerts-19.2.3-678.ge9 152 kB/s | 17 kB 00:00
2026-03-10T09:54:51.763 INFO:teuthology.orchestra.run.vm01.stdout:(36/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 261 kB/s | 40 kB 00:00
2026-03-10T09:54:51.801 INFO:teuthology.orchestra.run.vm08.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.4 MB/s | 165 kB 00:00
2026-03-10T09:54:51.822 INFO:teuthology.orchestra.run.vm02.stdout:(34/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.3 MB/s | 299 kB 00:00
2026-03-10T09:54:51.868 INFO:teuthology.orchestra.run.vm01.stdout:(37/136): libconfig-1.7.2-9.el9.x86_64.rpm 692 kB/s | 72 kB 00:00
2026-03-10T09:54:51.870 INFO:teuthology.orchestra.run.vm01.stdout:(38/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.1 MB/s | 351 kB 00:00
2026-03-10T09:54:51.919 INFO:teuthology.orchestra.run.vm08.stdout:(20/136): python3-rados-19.2.3-678.ge911bdeb.el 2.7 MB/s | 323 kB 00:00
2026-03-10T09:54:51.937 INFO:teuthology.orchestra.run.vm08.stdout:(21/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 5.7 MB/s | 5.4 MB 00:00
2026-03-10T09:54:51.937 INFO:teuthology.orchestra.run.vm01.stdout:(39/136): libquadmath-11.5.0-14.el9.x86_64.rpm 2.7 MB/s | 184 kB 00:00
2026-03-10T09:54:51.971 INFO:teuthology.orchestra.run.vm02.stdout:(35/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 3.4 MB/s | 769 kB 00:00
2026-03-10T09:54:51.987 INFO:teuthology.orchestra.run.vm01.stdout:(40/136): mailcap-2.1.49-5.el9.noarch.rpm 669 kB/s | 33 kB 00:00
2026-03-10T09:54:52.033 INFO:teuthology.orchestra.run.vm01.stdout:(41/136): libgfortran-11.5.0-14.el9.x86_64.rpm 4.7 MB/s | 794 kB 00:00
2026-03-10T09:54:52.038 INFO:teuthology.orchestra.run.vm08.stdout:(22/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.5 MB/s | 303 kB 00:00
2026-03-10T09:54:52.047 INFO:teuthology.orchestra.run.vm01.stdout:(42/136): pciutils-3.7.0-7.el9.x86_64.rpm 1.5 MB/s | 93 kB 00:00
2026-03-10T09:54:52.057 INFO:teuthology.orchestra.run.vm08.stdout:(23/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 835 kB/s | 100 kB 00:00
2026-03-10T09:54:52.103 INFO:teuthology.orchestra.run.vm01.stdout:(43/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 3.5 MB/s | 253 kB 00:00
2026-03-10T09:54:52.154 INFO:teuthology.orchestra.run.vm08.stdout:(24/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 736 kB/s | 85 kB 00:00
2026-03-10T09:54:52.172 INFO:teuthology.orchestra.run.vm01.stdout:(44/136): python3-ply-3.11-14.el9.noarch.rpm 1.5 MB/s | 106 kB 00:00
2026-03-10T09:54:52.207 INFO:teuthology.orchestra.run.vm01.stdout:(45/136): python3-cryptography-36.0.1-5.el9.x86 7.8 MB/s | 1.2 MB 00:00
2026-03-10T09:54:52.235 INFO:teuthology.orchestra.run.vm01.stdout:(46/136): python3-pycparser-2.20-6.el9.noarch.r 2.1 MB/s | 135 kB 00:00
2026-03-10T09:54:52.269 INFO:teuthology.orchestra.run.vm08.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.5 MB/s | 171 kB 00:00
2026-03-10T09:54:52.273 INFO:teuthology.orchestra.run.vm01.stdout:(47/136): python3-requests-2.25.1-10.el9.noarch 1.9 MB/s | 126 kB 00:00
2026-03-10T09:54:52.283 INFO:teuthology.orchestra.run.vm02.stdout:(36/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 130 kB/s | 40 kB 00:00
2026-03-10T09:54:52.305 INFO:teuthology.orchestra.run.vm01.stdout:(48/136): python3-urllib3-1.26.5-7.el9.noarch.r 3.0 MB/s | 218 kB 00:00
2026-03-10T09:54:52.333 INFO:teuthology.orchestra.run.vm02.stdout:(37/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 688 kB/s | 351 kB 00:00
2026-03-10T09:54:52.348 INFO:teuthology.orchestra.run.vm01.stdout:(49/136): unzip-6.0-59.el9.x86_64.rpm 2.4 MB/s | 182 kB 00:00
2026-03-10T09:54:52.378 INFO:teuthology.orchestra.run.vm01.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 3.6 MB/s | 266 kB 00:00
2026-03-10T09:54:52.382 INFO:teuthology.orchestra.run.vm08.stdout:(26/136): ceph-grafana-dashboards-19.2.3-678.ge 276 kB/s | 31 kB 00:00
2026-03-10T09:54:52.434 INFO:teuthology.orchestra.run.vm02.stdout:(38/136): libconfig-1.7.2-9.el9.x86_64.rpm 480 kB/s | 72 kB 00:00
2026-03-10T09:54:52.492 INFO:teuthology.orchestra.run.vm02.stdout:(39/136): libgfortran-11.5.0-14.el9.x86_64.rpm 4.9 MB/s | 794 kB 00:00
2026-03-10T09:54:52.492 INFO:teuthology.orchestra.run.vm01.stdout:(51/136): flexiblas-3.0.4-9.el9.x86_64.rpm 260 kB/s | 30 kB 00:00
2026-03-10T09:54:52.513 INFO:teuthology.orchestra.run.vm08.stdout:(27/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.1 MB/s | 150 kB 00:00
2026-03-10T09:54:52.526 INFO:teuthology.orchestra.run.vm01.stdout:(52/136): boost-program-options-1.75.0-13.el9.x 585 kB/s | 104 kB 00:00
2026-03-10T09:54:52.573 INFO:teuthology.orchestra.run.vm02.stdout:(40/136): mailcap-2.1.49-5.el9.noarch.rpm 410 kB/s | 33 kB 00:00
2026-03-10T09:54:52.573 INFO:teuthology.orchestra.run.vm08.stdout:(28/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 6.0 MB/s | 3.1 MB 00:00
2026-03-10T09:54:52.585 INFO:teuthology.orchestra.run.vm02.stdout:(41/136): libquadmath-11.5.0-14.el9.x86_64.rpm 1.2 MB/s | 184 kB 00:00
2026-03-10T09:54:52.590 INFO:teuthology.orchestra.run.vm01.stdout:(53/136): flexiblas-openblas-openmp-3.0.4-9.el9 232 kB/s | 15 kB 00:00
2026-03-10T09:54:52.669 INFO:teuthology.orchestra.run.vm02.stdout:(42/136): pciutils-3.7.0-7.el9.x86_64.rpm 981 kB/s | 93 kB 00:00
2026-03-10T09:54:52.734 INFO:teuthology.orchestra.run.vm01.stdout:(54/136): libnbd-1.20.3-4.el9.x86_64.rpm 1.1 MB/s | 164 kB 00:00
2026-03-10T09:54:52.736 INFO:teuthology.orchestra.run.vm02.stdout:(43/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 1.6 MB/s | 253 kB 00:00
2026-03-10T09:54:52.789 INFO:teuthology.orchestra.run.vm01.stdout:(55/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 2.9 MB/s | 160 kB 00:00
2026-03-10T09:54:52.820 INFO:teuthology.orchestra.run.vm02.stdout:(44/136): python3-ply-3.11-14.el9.noarch.rpm 1.2 MB/s | 106 kB 00:00
2026-03-10T09:54:52.831 INFO:teuthology.orchestra.run.vm02.stdout:(45/136): python3-cryptography-36.0.1-5.el9.x86 7.7 MB/s | 1.2 MB 00:00
2026-03-10T09:54:52.839 INFO:teuthology.orchestra.run.vm01.stdout:(56/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 890 kB/s | 45 kB 00:00
2026-03-10T09:54:52.905 INFO:teuthology.orchestra.run.vm02.stdout:(46/136): python3-pycparser-2.20-6.el9.noarch.r 1.6 MB/s | 135 kB 00:00
2026-03-10T09:54:52.915 INFO:teuthology.orchestra.run.vm02.stdout:(47/136): python3-requests-2.25.1-10.el9.noarch 1.5 MB/s | 126 kB 00:00
2026-03-10T09:54:52.944 INFO:teuthology.orchestra.run.vm01.stdout:(57/136): librdkafka-1.6.1-102.el9.x86_64.rpm 6.2 MB/s | 662 kB 00:00
2026-03-10T09:54:52.991 INFO:teuthology.orchestra.run.vm02.stdout:(48/136): python3-urllib3-1.26.5-7.el9.noarch.r 2.5 MB/s | 218 kB 00:00
2026-03-10T09:54:52.994 INFO:teuthology.orchestra.run.vm01.stdout:(58/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 4.8 MB/s | 246 kB 00:00
2026-03-10T09:54:52.997 INFO:teuthology.orchestra.run.vm02.stdout:(49/136): unzip-6.0-59.el9.x86_64.rpm 2.2 MB/s | 182 kB 00:00
2026-03-10T09:54:53.005 INFO:teuthology.orchestra.run.vm08.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 7.7 MB/s | 3.8 MB 00:00
2026-03-10T09:54:53.030 INFO:teuthology.orchestra.run.vm01.stdout:(59/136): libxslt-1.1.34-12.el9.x86_64.rpm 6.4 MB/s | 233 kB 00:00
2026-03-10T09:54:53.080 INFO:teuthology.orchestra.run.vm02.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 2.9 MB/s | 266 kB 00:00
2026-03-10T09:54:53.097 INFO:teuthology.orchestra.run.vm01.stdout:(60/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 4.3 MB/s | 292 kB 00:00
2026-03-10T09:54:53.113 INFO:teuthology.orchestra.run.vm01.stdout:(61/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 4.8 MB/s | 3.0 MB 00:00
2026-03-10T09:54:53.139 INFO:teuthology.orchestra.run.vm08.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 1.8 MB/s | 253 kB 00:00
2026-03-10T09:54:53.159 INFO:teuthology.orchestra.run.vm01.stdout:(62/136): openblas-0.3.29-1.el9.x86_64.rpm 929 kB/s | 42 kB 00:00
2026-03-10T09:54:53.181 INFO:teuthology.orchestra.run.vm01.stdout:(63/136): lua-5.4.4-4.el9.x86_64.rpm 2.3 MB/s | 188 kB 00:00
2026-03-10T09:54:53.181 INFO:teuthology.orchestra.run.vm02.stdout:(51/136): boost-program-options-1.75.0-13.el9.x 566 kB/s | 104 kB 00:00
2026-03-10T09:54:53.195 INFO:teuthology.orchestra.run.vm02.stdout:(52/136): flexiblas-3.0.4-9.el9.x86_64.rpm 257 kB/s | 30 kB 00:00
2026-03-10T09:54:53.258 INFO:teuthology.orchestra.run.vm02.stdout:(53/136): flexiblas-openblas-openmp-3.0.4-9.el9 239 kB/s | 15 kB 00:00
2026-03-10T09:54:53.280 INFO:teuthology.orchestra.run.vm08.stdout:(31/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 350 kB/s | 49 kB 00:00
2026-03-10T09:54:53.355 INFO:teuthology.orchestra.run.vm01.stdout:(64/136): protobuf-3.14.0-17.el9.x86_64.rpm 5.7 MB/s | 1.0 MB 00:00
2026-03-10T09:54:53.378 INFO:teuthology.orchestra.run.vm02.stdout:(54/136): libnbd-1.20.3-4.el9.x86_64.rpm 1.3 MB/s | 164 kB 00:00
2026-03-10T09:54:53.425 INFO:teuthology.orchestra.run.vm08.stdout:(32/136): ceph-prometheus-alerts-19.2.3-678.ge9 115 kB/s | 17 kB 00:00
2026-03-10T09:54:53.440 INFO:teuthology.orchestra.run.vm02.stdout:(55/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 2.5 MB/s | 160 kB 00:00
2026-03-10T09:54:53.492 INFO:teuthology.orchestra.run.vm02.stdout:(56/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 888 kB/s | 45 kB 00:00
2026-03-10T09:54:53.538 INFO:teuthology.orchestra.run.vm02.stdout:(57/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 8.4 MB/s | 3.0 MB 00:00
2026-03-10T09:54:53.575 INFO:teuthology.orchestra.run.vm08.stdout:(33/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.0 MB/s | 299 kB 00:00
2026-03-10T09:54:53.579 INFO:teuthology.orchestra.run.vm02.stdout:(58/136): librdkafka-1.6.1-102.el9.x86_64.rpm 7.4 MB/s | 662 kB 00:00
2026-03-10T09:54:53.606 INFO:teuthology.orchestra.run.vm02.stdout:(59/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 3.6 MB/s | 246 kB 00:00
2026-03-10T09:54:53.642 INFO:teuthology.orchestra.run.vm02.stdout:(60/136): libxslt-1.1.34-12.el9.x86_64.rpm 3.7 MB/s | 233 kB 00:00
2026-03-10T09:54:53.685 INFO:teuthology.orchestra.run.vm02.stdout:(61/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 3.6 MB/s | 292 kB 00:00
2026-03-10T09:54:53.700 INFO:teuthology.orchestra.run.vm02.stdout:(62/136): lua-5.4.4-4.el9.x86_64.rpm 3.2 MB/s | 188 kB 00:00
2026-03-10T09:54:53.732 INFO:teuthology.orchestra.run.vm08.stdout:(34/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 4.8 MB/s | 769 kB 00:00
2026-03-10T09:54:53.735 INFO:teuthology.orchestra.run.vm02.stdout:(63/136): openblas-0.3.29-1.el9.x86_64.rpm 843 kB/s | 42 kB 00:00
2026-03-10T09:54:53.802 INFO:teuthology.orchestra.run.vm08.stdout:(35/136): ceph-mgr-diskprediction-local-19.2.3- 6.0 MB/s | 7.4 MB 00:01
2026-03-10T09:54:53.885 INFO:teuthology.orchestra.run.vm02.stdout:(64/136): protobuf-3.14.0-17.el9.x86_64.rpm 6.7 MB/s | 1.0 MB 00:00
2026-03-10T09:54:53.911 INFO:teuthology.orchestra.run.vm01.stdout:(65/136): openblas-openmp-0.3.29-1.el9.x86_64.r 7.0 MB/s | 5.3 MB 00:00
2026-03-10T09:54:53.919 INFO:teuthology.orchestra.run.vm08.stdout:(36/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 347 kB/s | 40 kB 00:00
2026-03-10T09:54:53.971 INFO:teuthology.orchestra.run.vm08.stdout:(37/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.4 MB/s | 351 kB 00:00
2026-03-10T09:54:54.003 INFO:teuthology.orchestra.run.vm01.stdout:(66/136): python3-devel-3.9.25-3.el9.x86_64.rpm 2.6 MB/s | 244 kB 00:00
2026-03-10T09:54:54.073 INFO:teuthology.orchestra.run.vm02.stdout:(65/136): openblas-openmp-0.3.29-1.el9.x86_64.r 14 MB/s | 5.3 MB 00:00
2026-03-10T09:54:54.093 INFO:teuthology.orchestra.run.vm02.stdout:(66/136): python3-babel-2.9.1-2.el9.noarch.rpm 29 MB/s | 6.0 MB 00:00
2026-03-10T09:54:54.094 INFO:teuthology.orchestra.run.vm01.stdout:(67/136): python3-jinja2-2.11.3-8.el9.noarch.rp 2.7 MB/s | 249 kB 00:00
2026-03-10T09:54:54.124 INFO:teuthology.orchestra.run.vm02.stdout:(67/136): python3-devel-3.9.25-3.el9.x86_64.rpm 4.7 MB/s | 244 kB 00:00
2026-03-10T09:54:54.139 INFO:teuthology.orchestra.run.vm02.stdout:(68/136): python3-jinja2-2.11.3-8.el9.noarch.rp 5.4 MB/s | 249 kB 00:00
2026-03-10T09:54:54.139 INFO:teuthology.orchestra.run.vm01.stdout:(68/136): python3-jmespath-1.0.1-1.el9.noarch.r 1.0 MB/s | 48 kB 00:00
2026-03-10T09:54:54.169 INFO:teuthology.orchestra.run.vm02.stdout:(69/136): python3-jmespath-1.0.1-1.el9.noarch.r 1.0 MB/s | 48 kB 00:00
2026-03-10T09:54:54.187 INFO:teuthology.orchestra.run.vm02.stdout:(70/136): python3-libstoragemgmt-1.10.1-1.el9.x 3.6 MB/s | 177 kB 00:00
2026-03-10T09:54:54.236 INFO:teuthology.orchestra.run.vm02.stdout:(71/136): python3-mako-1.1.4-6.el9.noarch.rpm 2.5 MB/s | 172 kB 00:00
2026-03-10T09:54:54.254 INFO:teuthology.orchestra.run.vm02.stdout:(72/136): python3-markupsafe-1.1.1-12.el9.x86_6 522 kB/s | 35 kB 00:00
2026-03-10T09:54:54.264 INFO:teuthology.orchestra.run.vm01.stdout:(69/136): python3-libstoragemgmt-1.10.1-1.el9.x 1.4 MB/s | 177 kB 00:00
2026-03-10T09:54:54.304 INFO:teuthology.orchestra.run.vm01.stdout:(70/136): python3-babel-2.9.1-2.el9.noarch.rpm 6.3 MB/s | 6.0 MB 00:00
2026-03-10T09:54:54.359 INFO:teuthology.orchestra.run.vm02.stdout:(73/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 4.1 MB/s | 442 kB 00:00
2026-03-10T09:54:54.373 INFO:teuthology.orchestra.run.vm01.stdout:(71/136): python3-markupsafe-1.1.1-12.el9.x86_6 504 kB/s | 35 kB 00:00
2026-03-10T09:54:54.405 INFO:teuthology.orchestra.run.vm02.stdout:(74/136): python3-packaging-20.9-5.el9.noarch.r 1.7 MB/s | 77 kB 00:00
2026-03-10T09:54:54.421 INFO:teuthology.orchestra.run.vm02.stdout:(75/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 33 MB/s | 6.1 MB 00:00
2026-03-10T09:54:54.456 INFO:teuthology.orchestra.run.vm02.stdout:(76/136): python3-protobuf-3.14.0-17.el9.noarch 5.1 MB/s | 267 kB 00:00
2026-03-10T09:54:54.471 INFO:teuthology.orchestra.run.vm02.stdout:(77/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 3.1 MB/s | 157 kB 00:00
2026-03-10T09:54:54.499 INFO:teuthology.orchestra.run.vm01.stdout:(72/136): python3-mako-1.1.4-6.el9.noarch.rpm 734 kB/s | 172 kB 00:00
2026-03-10T09:54:54.510 INFO:teuthology.orchestra.run.vm02.stdout:(78/136): python3-pyasn1-modules-0.4.8-7.el9.no 5.1 MB/s | 277 kB 00:00
2026-03-10T09:54:54.550 INFO:teuthology.orchestra.run.vm02.stdout:(79/136): python3-requests-oauthlib-1.3.0-12.el 680 kB/s | 54 kB 00:00
2026-03-10T09:54:54.620 INFO:teuthology.orchestra.run.vm01.stdout:(73/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 3.6 MB/s | 442 kB 00:00
2026-03-10T09:54:54.628 INFO:teuthology.orchestra.run.vm08.stdout:(38/136): libconfig-1.7.2-9.el9.x86_64.rpm 102 kB/s | 72 kB 00:00
2026-03-10T09:54:54.675 INFO:teuthology.orchestra.run.vm01.stdout:(74/136): python3-packaging-20.9-5.el9.noarch.r 1.4 MB/s | 77 kB 00:00
2026-03-10T09:54:54.683 INFO:teuthology.orchestra.run.vm08.stdout:(39/136): libgfortran-11.5.0-14.el9.x86_64.rpm 1.1 MB/s | 794 kB 00:00
2026-03-10T09:54:54.687 INFO:teuthology.orchestra.run.vm02.stdout:(80/136): python3-toml-0.10.2-6.el9.noarch.rpm 305 kB/s | 42 kB 00:00
2026-03-10T09:54:54.740 INFO:teuthology.orchestra.run.vm02.stdout:(81/136): qatlib-25.08.0-2.el9.x86_64.rpm 4.5 MB/s | 240 kB 00:00
2026-03-10T09:54:54.814 INFO:teuthology.orchestra.run.vm02.stdout:(82/136): qatlib-service-25.08.0-2.el9.x86_64.r 504 kB/s | 37 kB 00:00
2026-03-10T09:54:54.814 INFO:teuthology.orchestra.run.vm08.stdout:(40/136): libquadmath-11.5.0-14.el9.x86_64.rpm 996 kB/s | 184 kB 00:00
2026-03-10T09:54:54.833 INFO:teuthology.orchestra.run.vm01.stdout:(75/136): ceph-test-19.2.3-678.ge911bdeb.el9.x8 7.4 MB/s | 50 MB 00:06
2026-03-10T09:54:54.873 INFO:teuthology.orchestra.run.vm02.stdout:(83/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 1.1 MB/s | 66 kB 00:00
2026-03-10T09:54:54.879 INFO:teuthology.orchestra.run.vm01.stdout:(76/136): python3-protobuf-3.14.0-17.el9.noarch 1.3 MB/s | 267 kB 00:00
2026-03-10T09:54:54.909 INFO:teuthology.orchestra.run.vm08.stdout:(41/136): mailcap-2.1.49-5.el9.noarch.rpm 147 kB/s | 33 kB 00:00
2026-03-10T09:54:54.916 INFO:teuthology.orchestra.run.vm08.stdout:(42/136): pciutils-3.7.0-7.el9.x86_64.rpm 912 kB/s | 93 kB 00:00
2026-03-10T09:54:54.952 INFO:teuthology.orchestra.run.vm08.stdout:(43/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 5.8 MB/s | 253 kB 00:00
2026-03-10T09:54:54.957 INFO:teuthology.orchestra.run.vm02.stdout:(84/136): socat-1.7.4.1-8.el9.x86_64.rpm 3.6 MB/s | 303 kB 00:00
2026-03-10T09:54:54.963 INFO:teuthology.orchestra.run.vm01.stdout:(77/136): python3-pyasn1-modules-0.4.8-7.el9.no 3.2 MB/s | 277 kB 00:00
2026-03-10T09:54:54.996 INFO:teuthology.orchestra.run.vm08.stdout:(44/136): python3-ply-3.11-14.el9.noarch.rpm 2.4 MB/s | 106 kB 00:00
2026-03-10T09:54:55.009 INFO:teuthology.orchestra.run.vm01.stdout:(78/136): python3-requests-oauthlib-1.3.0-12.el 1.1 MB/s | 54 kB 00:00
2026-03-10T09:54:55.020 INFO:teuthology.orchestra.run.vm08.stdout:(45/136): python3-cryptography-36.0.1-5.el9.x86 12 MB/s | 1.2 MB 00:00
2026-03-10T09:54:55.025 INFO:teuthology.orchestra.run.vm02.stdout:(85/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 932 kB/s | 64 kB 00:00
2026-03-10T09:54:55.062 INFO:teuthology.orchestra.run.vm01.stdout:(79/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 691 kB/s | 157 kB 00:00
2026-03-10T09:54:55.073 INFO:teuthology.orchestra.run.vm08.stdout:(46/136): python3-pycparser-2.20-6.el9.noarch.r 1.7 MB/s | 135 kB 00:00
2026-03-10T09:54:55.074 INFO:teuthology.orchestra.run.vm08.stdout:(47/136): python3-requests-2.25.1-10.el9.noarch 2.3 MB/s | 126 kB 00:00
2026-03-10T09:54:55.111 INFO:teuthology.orchestra.run.vm01.stdout:(80/136): python3-toml-0.10.2-6.el9.noarch.rpm 843 kB/s | 42 kB 00:00
2026-03-10T09:54:55.112 INFO:teuthology.orchestra.run.vm08.stdout:(48/136): python3-urllib3-1.26.5-7.el9.noarch.r 5.4 MB/s | 218 kB 00:00
2026-03-10T09:54:55.114 INFO:teuthology.orchestra.run.vm08.stdout:(49/136): unzip-6.0-59.el9.x86_64.rpm 4.4 MB/s | 182 kB 00:00
2026-03-10T09:54:55.145 INFO:teuthology.orchestra.run.vm08.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 8.0 MB/s | 266 kB 00:00
2026-03-10T09:54:55.215 INFO:teuthology.orchestra.run.vm02.stdout:(86/136): lua-devel-5.4.4-4.el9.x86_64.rpm 118 kB/s | 22 kB 00:00
2026-03-10T09:54:55.234 INFO:teuthology.orchestra.run.vm01.stdout:(81/136): qatlib-25.08.0-2.el9.x86_64.rpm 1.9 MB/s | 240 kB 00:00
2026-03-10T09:54:55.304 INFO:teuthology.orchestra.run.vm01.stdout:(82/136): qatlib-service-25.08.0-2.el9.x86_64.r 529 kB/s | 37 kB 00:00
2026-03-10T09:54:55.354 INFO:teuthology.orchestra.run.vm02.stdout:(87/136): protobuf-compiler-3.14.0-17.el9.x86_6 6.1 MB/s | 862 kB 00:00
2026-03-10T09:54:55.358 INFO:teuthology.orchestra.run.vm01.stdout:(83/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 1.2 MB/s | 66 kB 00:00
2026-03-10T09:54:55.376 INFO:teuthology.orchestra.run.vm02.stdout:(88/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 25 MB/s | 551 kB 00:00
2026-03-10T09:54:55.389 INFO:teuthology.orchestra.run.vm01.stdout:(84/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 6.0 MB/s | 6.1 MB 00:01
2026-03-10T09:54:55.389 INFO:teuthology.orchestra.run.vm02.stdout:(89/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 25 MB/s | 308 kB 00:00
2026-03-10T09:54:55.392 INFO:teuthology.orchestra.run.vm02.stdout:(90/136): grpc-data-1.46.7-10.el9.noarch.rpm 8.2 MB/s | 19 kB 00:00
2026-03-10T09:54:55.499 INFO:teuthology.orchestra.run.vm01.stdout:(85/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 580 kB/s | 64 kB 00:00
2026-03-10T09:54:55.505 INFO:teuthology.orchestra.run.vm02.stdout:(91/136): libarrow-9.0.0-15.el9.x86_64.rpm 39 MB/s | 4.4 MB 00:00
2026-03-10T09:54:55.508 INFO:teuthology.orchestra.run.vm01.stdout:(86/136): socat-1.7.4.1-8.el9.x86_64.rpm 2.0 MB/s | 303 kB 00:00
2026-03-10T09:54:55.509 INFO:teuthology.orchestra.run.vm08.stdout:(51/136): ceph-test-19.2.3-678.ge911bdeb.el9.x8 9.1 MB/s | 50 MB 00:05
2026-03-10T09:54:55.509 INFO:teuthology.orchestra.run.vm02.stdout:(92/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 8.7 MB/s | 25 kB 00:00
2026-03-10T09:54:55.512 INFO:teuthology.orchestra.run.vm02.stdout:(93/136): liboath-2.6.12-1.el9.x86_64.rpm 17 MB/s | 49 kB 00:00
2026-03-10T09:54:55.515 INFO:teuthology.orchestra.run.vm02.stdout:(94/136): libunwind-1.6.2-1.el9.x86_64.rpm 23 MB/s | 67 kB 00:00
2026-03-10T09:54:55.520 INFO:teuthology.orchestra.run.vm02.stdout:(95/136): luarocks-3.9.2-5.el9.noarch.rpm 37 MB/s | 151 kB 00:00
2026-03-10T09:54:55.536 INFO:teuthology.orchestra.run.vm02.stdout:(96/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 51 MB/s | 838 kB 00:00
2026-03-10T09:54:55.601 INFO:teuthology.orchestra.run.vm02.stdout:(97/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 18 MB/s | 19 MB 00:01
2026-03-10T09:54:55.606 INFO:teuthology.orchestra.run.vm02.stdout:(98/136): python3-asyncssh-2.13.2-5.el9.noarch. 7.7 MB/s | 548 kB 00:00
2026-03-10T09:54:55.606 INFO:teuthology.orchestra.run.vm02.stdout:(99/136): python3-autocommand-2.2.2-8.el9.noarc 6.0 MB/s | 29 kB 00:00
2026-03-10T09:54:55.609 INFO:teuthology.orchestra.run.vm02.stdout:(100/136): python3-backports-tarfile-1.2.0-1.el 22 MB/s | 60 kB 00:00
2026-03-10T09:54:55.610 INFO:teuthology.orchestra.run.vm02.stdout:(101/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 11 MB/s | 43 kB 00:00
2026-03-10T09:54:55.611 INFO:teuthology.orchestra.run.vm02.stdout:(102/136): python3-cachetools-4.2.4-1.el9.noarc 12 MB/s | 32 kB 00:00
2026-03-10T09:54:55.611 INFO:teuthology.orchestra.run.vm01.stdout:(87/136): lua-devel-5.4.4-4.el9.x86_64.rpm 199 kB/s | 22 kB 00:00
2026-03-10T09:54:55.619 INFO:teuthology.orchestra.run.vm02.stdout:(103/136): python3-cheroot-10.0.1-4.el9.noarch. 25 MB/s | 173 kB 00:00
2026-03-10T09:54:55.624 INFO:teuthology.orchestra.run.vm02.stdout:(104/136): python3-certifi-2023.05.07-4.el9.noa 1.0 MB/s | 14 kB 00:00
2026-03-10T09:54:55.627 INFO:teuthology.orchestra.run.vm02.stdout:(105/136): python3-cherrypy-18.6.1-2.el9.noarch 43 MB/s | 358 kB 00:00
2026-03-10T09:54:55.643 INFO:teuthology.orchestra.run.vm01.stdout:(88/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 17 MB/s | 551 kB 00:00
2026-03-10T09:54:55.651 INFO:teuthology.orchestra.run.vm02.stdout:(106/136): python3-google-auth-2.45.0-1.el9.noa 9.1 MB/s | 254 kB 00:00
2026-03-10T09:54:55.659 INFO:teuthology.orchestra.run.vm02.stdout:(107/136): python3-grpcio-1.46.7-10.el9.x86_64. 64 MB/s | 2.0 MB 00:00
2026-03-10T09:54:55.659 INFO:teuthology.orchestra.run.vm01.stdout:(89/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 20 MB/s | 308 kB 00:00
2026-03-10T09:54:55.660 INFO:teuthology.orchestra.run.vm02.stdout:(108/136): python3-grpcio-tools-1.46.7-10.el9.x 17 MB/s | 144 kB 00:00
2026-03-10T09:54:55.660 INFO:teuthology.orchestra.run.vm02.stdout:(109/136): python3-jaraco-8.2.1-3.el9.noarch.rp 5.3 MB/s | 11 kB 00:00
2026-03-10T09:54:55.661 INFO:teuthology.orchestra.run.vm01.stdout:(90/136): grpc-data-1.46.7-10.el9.noarch.rpm 9.5 MB/s | 19 kB 00:00
2026-03-10T09:54:55.662 INFO:teuthology.orchestra.run.vm02.stdout:(110/136): python3-jaraco-classes-3.2.1-5.el9.n 7.3 MB/s | 18 kB 00:00
2026-03-10T09:54:55.663 INFO:teuthology.orchestra.run.vm02.stdout:(111/136): python3-jaraco-collections-3.0.0-8.e 10 MB/s | 23 kB 00:00
2026-03-10T09:54:55.665 INFO:teuthology.orchestra.run.vm02.stdout:(112/136): python3-jaraco-context-6.0.1-3.el9.n 6.5 MB/s | 20 kB 00:00
2026-03-10T09:54:55.667 INFO:teuthology.orchestra.run.vm02.stdout:(113/136): python3-jaraco-functools-3.5.0-2.el9 4.4 MB/s | 19 kB 00:00
2026-03-10T09:54:55.668 INFO:teuthology.orchestra.run.vm02.stdout:(114/136): python3-jaraco-text-4.0.0-2.el9.noar 10 MB/s | 26 kB 00:00
2026-03-10T09:54:55.675 INFO:teuthology.orchestra.run.vm02.stdout:(115/136): python3-logutils-0.3.5-21.el9.noarch 6.5 MB/s | 46 kB 00:00
2026-03-10T09:54:55.685 INFO:teuthology.orchestra.run.vm02.stdout:(116/136): python3-kubernetes-26.1.0-3.el9.noar 60 MB/s | 1.0 MB 00:00
2026-03-10T09:54:55.687 INFO:teuthology.orchestra.run.vm02.stdout:(117/136): python3-more-itertools-8.12.0-2.el9. 6.6 MB/s | 79 kB 00:00
2026-03-10T09:54:55.688 INFO:teuthology.orchestra.run.vm02.stdout:(118/136): python3-natsort-7.1.1-5.el9.noarch.r 15 MB/s | 58 kB 00:00
2026-03-10T09:54:55.689 INFO:teuthology.orchestra.run.vm08.stdout:(52/136): flexiblas-3.0.4-9.el9.x86_64.rpm 54 kB/s | 30 kB 00:00
2026-03-10T09:54:55.692 INFO:teuthology.orchestra.run.vm02.stdout:(119/136): python3-portend-3.1.0-2.el9.noarch.r 5.0 MB/s | 16 kB 00:00
2026-03-10T09:54:55.696 INFO:teuthology.orchestra.run.vm02.stdout:(120/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 20 MB/s | 90 kB 00:00
2026-03-10T09:54:55.701 INFO:teuthology.orchestra.run.vm02.stdout:(121/136): python3-pecan-1.4.2-3.el9.noarch.rpm 19 MB/s | 272 kB 00:00
2026-03-10T09:54:55.702 INFO:teuthology.orchestra.run.vm02.stdout:(122/136): python3-repoze-lru-0.7-16.el9.noarch 5.4 MB/s | 31 kB 00:00
2026-03-10T09:54:55.708 INFO:teuthology.orchestra.run.vm02.stdout:(123/136): python3-rsa-4.9-2.el9.noarch.rpm 11 MB/s | 59 kB 00:00
2026-03-10T09:54:55.713 INFO:teuthology.orchestra.run.vm02.stdout:(124/136): python3-tempora-5.0.0-2.el9.noarch.r 7.5 MB/s | 36 kB 00:00
2026-03-10T09:54:55.715 INFO:teuthology.orchestra.run.vm02.stdout:(125/136): python3-routes-2.5.1-5.el9.noarch.rp 14 MB/s | 188 kB 00:00
2026-03-10T09:54:55.718 INFO:teuthology.orchestra.run.vm02.stdout:(126/136): python3-typing-extensions-4.15.0-1.e 18 MB/s | 86 kB 00:00
2026-03-10T09:54:55.724 INFO:teuthology.orchestra.run.vm02.stdout:(127/136): python3-websocket-client-1.2.3-2.el9 15 MB/s | 90 kB 00:00
2026-03-10T09:54:55.730 INFO:teuthology.orchestra.run.vm02.stdout:(128/136): python3-webob-1.8.8-2.el9.noarch.rpm 15 MB/s | 230 kB 00:00
2026-03-10T09:54:55.735 INFO:teuthology.orchestra.run.vm02.stdout:(129/136): python3-werkzeug-2.0.3-3.el9.1.noarc 43 MB/s | 427 kB 00:00
2026-03-10T09:54:55.737 INFO:teuthology.orchestra.run.vm02.stdout:(130/136): python3-xmltodict-0.12.0-15.el9.noar 3.4 MB/s | 22 kB 00:00
2026-03-10T09:54:55.737 INFO:teuthology.orchestra.run.vm02.stdout:(131/136): python3-zc-lockfile-2.0-10.el9.noarc 7.3 MB/s | 20 kB 00:00
2026-03-10T09:54:55.750 INFO:teuthology.orchestra.run.vm01.stdout:(91/136): libarrow-9.0.0-15.el9.x86_64.rpm 50 MB/s | 4.4 MB 00:00
2026-03-10T09:54:55.760 INFO:teuthology.orchestra.run.vm02.stdout:(132/136): re2-20211101-20.el9.x86_64.rpm 8.1 MB/s | 191 kB 00:00
2026-03-10T09:54:55.773 INFO:teuthology.orchestra.run.vm02.stdout:(133/136): thrift-0.15.0-4.el9.x86_64.rpm 45 MB/s | 1.6 MB 00:00
2026-03-10T09:54:55.774 INFO:teuthology.orchestra.run.vm08.stdout:(53/136): flexiblas-openblas-openmp-3.0.4-9.el9 174 kB/s | 15 kB 00:00
2026-03-10T09:54:55.776 INFO:teuthology.orchestra.run.vm08.stdout:(54/136): boost-program-options-1.75.0-13.el9.x 157 kB/s | 104 kB 00:00
2026-03-10T09:54:55.778 INFO:teuthology.orchestra.run.vm01.stdout:(92/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 901 kB/s | 25 kB 00:00
2026-03-10T09:54:55.781 INFO:teuthology.orchestra.run.vm01.stdout:(93/136): liboath-2.6.12-1.el9.x86_64.rpm 15 MB/s | 49 kB 00:00
2026-03-10T09:54:55.785 INFO:teuthology.orchestra.run.vm01.stdout:(94/136): libunwind-1.6.2-1.el9.x86_64.rpm 23 MB/s | 67 kB 00:00
2026-03-10T09:54:55.789 INFO:teuthology.orchestra.run.vm01.stdout:(95/136): protobuf-compiler-3.14.0-17.el9.x86_6 3.0 MB/s | 862 kB 00:00
2026-03-10T09:54:55.790 INFO:teuthology.orchestra.run.vm01.stdout:(96/136): luarocks-3.9.2-5.el9.noarch.rpm 27 MB/s | 151 kB 00:00
2026-03-10T09:54:55.801 INFO:teuthology.orchestra.run.vm01.stdout:(97/136): python3-asyncssh-2.13.2-5.el9.noarch. 51 MB/s | 548 kB 00:00
2026-03-10T09:54:55.804 INFO:teuthology.orchestra.run.vm01.stdout:(98/136): python3-autocommand-2.2.2-8.el9.noarc 13 MB/s | 29 kB 00:00
2026-03-10T09:54:55.807 INFO:teuthology.orchestra.run.vm01.stdout:(99/136): python3-backports-tarfile-1.2.0-1.el9 24 MB/s | 60 kB 00:00
2026-03-10T09:54:55.809 INFO:teuthology.orchestra.run.vm01.stdout:(100/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 16 MB/s | 43 kB 00:00
2026-03-10T09:54:55.812 INFO:teuthology.orchestra.run.vm01.stdout:(101/136): python3-cachetools-4.2.4-1.el9.noarc 13 MB/s | 32 kB 00:00
2026-03-10T09:54:55.815 INFO:teuthology.orchestra.run.vm01.stdout:(102/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 32 MB/s | 838 kB 00:00
2026-03-10T09:54:55.816 INFO:teuthology.orchestra.run.vm01.stdout:(103/136): python3-certifi-2023.05.07-4.el9.noa 4.0 MB/s | 14 kB 00:00
2026-03-10T09:54:55.819 INFO:teuthology.orchestra.run.vm01.stdout:(104/136): python3-cheroot-10.0.1-4.el9.noarch. 46 MB/s | 173 kB 00:00
2026-03-10T09:54:55.826 INFO:teuthology.orchestra.run.vm01.stdout:(105/136): python3-google-auth-2.45.0-1.el9.noa 43 MB/s | 254 kB 00:00
2026-03-10T09:54:55.828 INFO:teuthology.orchestra.run.vm01.stdout:(106/136): python3-cherrypy-18.6.1-2.el9.noarch 31 MB/s | 358 kB 00:00
2026-03-10T09:54:55.836 INFO:teuthology.orchestra.run.vm01.stdout:(107/136): python3-grpcio-tools-1.46.7-10.el9.x 18 MB/s | 144 kB 00:00
2026-03-10T09:54:55.840 INFO:teuthology.orchestra.run.vm01.stdout:(108/136): python3-jaraco-8.2.1-3.el9.noarch.rp 2.9 MB/s | 11 kB 00:00
2026-03-10T09:54:55.846 INFO:teuthology.orchestra.run.vm01.stdout:(109/136): python3-jaraco-classes-3.2.1-5.el9.n 3.5 MB/s | 18 kB 00:00
2026-03-10T09:54:55.850 INFO:teuthology.orchestra.run.vm01.stdout:(110/136): python3-jaraco-collections-3.0.0-8.e 6.0 MB/s | 23 kB 00:00
2026-03-10T09:54:55.853 INFO:teuthology.orchestra.run.vm01.stdout:(111/136): python3-jaraco-context-6.0.1-3.el9.n 6.9 MB/s | 20 kB 00:00
2026-03-10T09:54:55.859 INFO:teuthology.orchestra.run.vm01.stdout:(112/136): python3-grpcio-1.46.7-10.el9.x86_64. 61 MB/s | 2.0 MB 00:00
2026-03-10T09:54:55.860 INFO:teuthology.orchestra.run.vm01.stdout:(113/136): python3-jaraco-functools-3.5.0-2.el9 2.7 MB/s | 19 kB 00:00
2026-03-10T09:54:55.862 INFO:teuthology.orchestra.run.vm01.stdout:(114/136): python3-jaraco-text-4.0.0-2.el9.noar 13 MB/s | 26 kB 00:00
2026-03-10T09:54:55.866 INFO:teuthology.orchestra.run.vm01.stdout:(115/136): python3-logutils-0.3.5-21.el9.noarch 13 MB/s | 46 kB 00:00
2026-03-10T09:54:55.871 INFO:teuthology.orchestra.run.vm01.stdout:(116/136): python3-more-itertools-8.12.0-2.el9. 16 MB/s | 79 kB 00:00
2026-03-10T09:54:55.878 INFO:teuthology.orchestra.run.vm01.stdout:(117/136): python3-kubernetes-26.1.0-3.el9.noar 60 MB/s | 1.0 MB 00:00
2026-03-10T09:54:55.879 INFO:teuthology.orchestra.run.vm01.stdout:(118/136): python3-natsort-7.1.1-5.el9.noarch.r 7.2 MB/s | 58 kB 00:00
2026-03-10T09:54:55.883 INFO:teuthology.orchestra.run.vm01.stdout:(119/136): python3-portend-3.1.0-2.el9.noarch.r 4.6 MB/s | 16 kB 00:00
2026-03-10T09:54:55.884 INFO:teuthology.orchestra.run.vm01.stdout:(120/136): python3-pecan-1.4.2-3.el9.noarch.rpm 43 MB/s | 272 kB 00:00
2026-03-10T09:54:55.886 INFO:teuthology.orchestra.run.vm01.stdout:(121/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 29 MB/s | 90 kB 00:00
2026-03-10T09:54:55.887 INFO:teuthology.orchestra.run.vm01.stdout:(122/136): python3-repoze-lru-0.7-16.el9.noarch 14 MB/s | 31 kB 00:00
2026-03-10T09:54:55.890 INFO:teuthology.orchestra.run.vm01.stdout:(123/136): python3-rsa-4.9-2.el9.noarch.rpm 20 MB/s | 59 kB 00:00
2026-03-10T09:54:55.891 INFO:teuthology.orchestra.run.vm01.stdout:(124/136): python3-routes-2.5.1-5.el9.noarch.rp 38 MB/s | 188 kB 00:00
2026-03-10T09:54:55.893 INFO:teuthology.orchestra.run.vm01.stdout:(125/136): python3-tempora-5.0.0-2.el9.noarch.r 11 MB/s | 36 kB 00:00
2026-03-10T09:54:55.894 INFO:teuthology.orchestra.run.vm01.stdout:(126/136): python3-typing-extensions-4.15.0-1.e 28 MB/s | 86 kB 00:00
2026-03-10T09:54:55.898 INFO:teuthology.orchestra.run.vm01.stdout:(127/136): python3-webob-1.8.8-2.el9.noarch.rpm 51 MB/s | 230 kB 00:00
2026-03-10T09:54:55.899 INFO:teuthology.orchestra.run.vm01.stdout:(128/136): python3-websocket-client-1.2.3-2.el9 21 MB/s | 90 kB 00:00
2026-03-10T09:54:55.902 INFO:teuthology.orchestra.run.vm01.stdout:(129/136): python3-xmltodict-0.12.0-15.el9.noar 6.4 MB/s | 22 kB 00:00
2026-03-10T09:54:55.905 INFO:teuthology.orchestra.run.vm01.stdout:(130/136): python3-werkzeug-2.0.3-3.el9.1.noarc 57 MB/s | 427 kB 00:00
2026-03-10T09:54:55.906 INFO:teuthology.orchestra.run.vm01.stdout:(131/136): python3-zc-lockfile-2.0-10.el9.noarc 5.8 MB/s | 20 kB 00:00
2026-03-10T09:54:55.910 INFO:teuthology.orchestra.run.vm01.stdout:(132/136): re2-20211101-20.el9.x86_64.rpm 47 MB/s | 191 kB 00:00
2026-03-10T09:54:55.931 INFO:teuthology.orchestra.run.vm01.stdout:(133/136): thrift-0.15.0-4.el9.x86_64.rpm 64 MB/s | 1.6 MB 00:00
2026-03-10T09:54:55.959 INFO:teuthology.orchestra.run.vm08.stdout:(55/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 881 kB/s | 160 kB 00:00
2026-03-10T09:54:56.026 INFO:teuthology.orchestra.run.vm08.stdout:(56/136): libnbd-1.20.3-4.el9.x86_64.rpm 652 kB/s | 164 kB 00:00
2026-03-10T09:54:56.044 INFO:teuthology.orchestra.run.vm08.stdout:(57/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 534 kB/s | 45 kB 00:00
2026-03-10T09:54:56.343 INFO:teuthology.orchestra.run.vm08.stdout:(58/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 825 kB/s | 246 kB 00:00
2026-03-10T09:54:56.395 INFO:teuthology.orchestra.run.vm08.stdout:(59/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 3.4 MB/s | 3.0 MB 00:00
2026-03-10T09:54:56.425 INFO:teuthology.orchestra.run.vm08.stdout:(60/136): libxslt-1.1.34-12.el9.x86_64.rpm 2.8 MB/s | 233 kB 00:00
2026-03-10T09:54:56.435 INFO:teuthology.orchestra.run.vm08.stdout:(61/136): librdkafka-1.6.1-102.el9.x86_64.rpm 1.6 MB/s | 662 kB 00:00
2026-03-10T09:54:56.511 INFO:teuthology.orchestra.run.vm08.stdout:(62/136): lua-5.4.4-4.el9.x86_64.rpm 2.1 MB/s | 188 kB 00:00
2026-03-10T09:54:56.521 INFO:teuthology.orchestra.run.vm08.stdout:(63/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 2.3 MB/s | 292 kB 00:00
2026-03-10T09:54:56.564 INFO:teuthology.orchestra.run.vm08.stdout:(64/136): openblas-0.3.29-1.el9.x86_64.rpm 327 kB/s | 42 kB 00:00
2026-03-10T09:54:56.907 INFO:teuthology.orchestra.run.vm08.stdout:(65/136): openblas-openmp-0.3.29-1.el9.x86_64.r 13 MB/s | 5.3 MB 00:00
2026-03-10T09:54:57.028 INFO:teuthology.orchestra.run.vm08.stdout:(66/136): python3-devel-3.9.25-3.el9.x86_64.rpm 2.0 MB/s | 244 kB 00:00
2026-03-10T09:54:57.088 INFO:teuthology.orchestra.run.vm08.stdout:(67/136): python3-babel-2.9.1-2.el9.noarch.rpm 11 MB/s | 6.0 MB 00:00
2026-03-10T09:54:57.162 INFO:teuthology.orchestra.run.vm08.stdout:(68/136): python3-jinja2-2.11.3-8.el9.noarch.rp 1.8 MB/s | 249 kB 00:00
2026-03-10T09:54:57.186 INFO:teuthology.orchestra.run.vm08.stdout:(69/136): python3-jmespath-1.0.1-1.el9.noarch.r 488 kB/s | 48 kB 00:00
2026-03-10T09:54:57.208 INFO:teuthology.orchestra.run.vm08.stdout:(70/136): protobuf-3.14.0-17.el9.x86_64.rpm 1.5 MB/s | 1.0 MB 00:00
2026-03-10T09:54:57.265 INFO:teuthology.orchestra.run.vm08.stdout:(71/136): python3-libstoragemgmt-1.10.1-1.el9.x 1.7 MB/s | 177 kB 00:00
2026-03-10T09:54:57.286 INFO:teuthology.orchestra.run.vm08.stdout:(72/136): python3-mako-1.1.4-6.el9.noarch.rpm 1.7 MB/s | 172 kB 00:00
2026-03-10T09:54:57.290 INFO:teuthology.orchestra.run.vm08.stdout:(73/136): python3-markupsafe-1.1.1-12.el9.x86_6 424 kB/s | 35 kB 00:00
2026-03-10T09:54:57.371 INFO:teuthology.orchestra.run.vm08.stdout:(74/136): python3-packaging-20.9-5.el9.noarch.r 952 kB/s | 77 kB 00:00
2026-03-10T09:54:57.487 INFO:teuthology.orchestra.run.vm08.stdout:(75/136): python3-protobuf-3.14.0-17.el9.noarch 2.3 MB/s | 267 kB 00:00
2026-03-10T09:54:57.519 INFO:teuthology.orchestra.run.vm08.stdout:(76/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 1.9 MB/s | 442 kB 00:00
2026-03-10T09:54:57.614 INFO:teuthology.orchestra.run.vm08.stdout:(77/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 18 MB/s | 6.1 MB 00:00
2026-03-10T09:54:57.616 INFO:teuthology.orchestra.run.vm08.stdout:(78/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 1.2 MB/s | 157 kB 00:00
2026-03-10T09:54:57.653 INFO:teuthology.orchestra.run.vm08.stdout:(79/136): python3-pyasn1-modules-0.4.8-7.el9.no 2.0 MB/s | 277 kB 00:00
2026-03-10T09:54:57.722 INFO:teuthology.orchestra.run.vm08.stdout:(80/136): python3-requests-oauthlib-1.3.0-12.el 501 kB/s | 54 kB 00:00
2026-03-10T09:54:57.738 INFO:teuthology.orchestra.run.vm08.stdout:(81/136): python3-toml-0.10.2-6.el9.noarch.rpm 489 kB/s | 42 kB 00:00
2026-03-10T09:54:57.768 INFO:teuthology.orchestra.run.vm01.stdout:(134/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 7.0 MB/s | 19 MB 00:02
2026-03-10T09:54:57.768 INFO:teuthology.orchestra.run.vm02.stdout:(134/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 1.6 MB/s | 3.2 MB 00:01
2026-03-10T09:54:57.813 INFO:teuthology.orchestra.run.vm08.stdout:(82/136): qatlib-service-25.08.0-2.el9.x86_64.r 491 kB/s | 37 kB 00:00
2026-03-10T09:54:57.828 INFO:teuthology.orchestra.run.vm08.stdout:(83/136): qatlib-25.08.0-2.el9.x86_64.rpm 2.2 MB/s | 240 kB 00:00
2026-03-10T09:54:57.921 INFO:teuthology.orchestra.run.vm08.stdout:(84/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 616 kB/s | 66 kB 00:00
2026-03-10T09:54:57.922 INFO:teuthology.orchestra.run.vm01.stdout:(135/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 1.6 MB/s | 3.2 MB 00:01
2026-03-10T09:54:57.931 INFO:teuthology.orchestra.run.vm02.stdout:(135/136): librados2-19.2.3-678.ge911bdeb.el9.x 1.6 MB/s | 3.4 MB 00:02
2026-03-10T09:54:57.960 INFO:teuthology.orchestra.run.vm08.stdout:(85/136): socat-1.7.4.1-8.el9.x86_64.rpm 2.3 MB/s | 303 kB 00:00
2026-03-10T09:54:58.000 INFO:teuthology.orchestra.run.vm08.stdout:(86/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 806 kB/s | 64 kB 00:00
2026-03-10T09:54:58.065 INFO:teuthology.orchestra.run.vm08.stdout:(87/136): lua-devel-5.4.4-4.el9.x86_64.rpm 212 kB/s | 22 kB 00:00
2026-03-10T09:54:58.080 INFO:teuthology.orchestra.run.vm08.stdout:(88/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 38 MB/s | 551 kB 00:00
2026-03-10T09:54:58.086 INFO:teuthology.orchestra.run.vm08.stdout:(89/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 46 MB/s | 308 kB 00:00
2026-03-10T09:54:58.089 INFO:teuthology.orchestra.run.vm08.stdout:(90/136): grpc-data-1.46.7-10.el9.noarch.rpm 8.4 MB/s | 19 kB 00:00
2026-03-10T09:54:58.159 INFO:teuthology.orchestra.run.vm08.stdout:(91/136): libarrow-9.0.0-15.el9.x86_64.rpm 63 MB/s | 4.4 MB 00:00
2026-03-10T09:54:58.161 INFO:teuthology.orchestra.run.vm08.stdout:(92/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 12 MB/s | 25 kB 00:00
2026-03-10T09:54:58.164 INFO:teuthology.orchestra.run.vm08.stdout:(93/136): liboath-2.6.12-1.el9.x86_64.rpm 20 MB/s | 49 kB 00:00
2026-03-10T09:54:58.167 INFO:teuthology.orchestra.run.vm08.stdout:(94/136): libunwind-1.6.2-1.el9.x86_64.rpm 23 MB/s | 67 kB 00:00
2026-03-10T09:54:58.170 INFO:teuthology.orchestra.run.vm08.stdout:(95/136): luarocks-3.9.2-5.el9.noarch.rpm 44 MB/s | 151 kB 00:00
2026-03-10T09:54:58.183 INFO:teuthology.orchestra.run.vm08.stdout:(96/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 67 MB/s | 838 kB 00:00
2026-03-10T09:54:58.192 INFO:teuthology.orchestra.run.vm08.stdout:(97/136): python3-asyncssh-2.13.2-5.el9.noarch. 62 MB/s | 548 kB 00:00
2026-03-10T09:54:58.194 INFO:teuthology.orchestra.run.vm08.stdout:(98/136): python3-autocommand-2.2.2-8.el9.noarc 12 MB/s | 29 kB 00:00
2026-03-10T09:54:58.197 INFO:teuthology.orchestra.run.vm08.stdout:(99/136): python3-backports-tarfile-1.2.0-1.el9 25 MB/s | 60 kB 00:00
2026-03-10T09:54:58.200 INFO:teuthology.orchestra.run.vm08.stdout:(100/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 13 MB/s | 43 kB 00:00
2026-03-10T09:54:58.203 INFO:teuthology.orchestra.run.vm08.stdout:(101/136): python3-cachetools-4.2.4-1.el9.noarc 13 MB/s | 32 kB 00:00
2026-03-10T09:54:58.205 INFO:teuthology.orchestra.run.vm08.stdout:(102/136): python3-certifi-2023.05.07-4.el9.noa 6.8 MB/s | 14 kB 00:00
2026-03-10T09:54:58.210 INFO:teuthology.orchestra.run.vm08.stdout:(103/136): python3-cheroot-10.0.1-4.el9.noarch. 41 MB/s | 173 kB 00:00
2026-03-10T09:54:58.217 INFO:teuthology.orchestra.run.vm08.stdout:(104/136): python3-cherrypy-18.6.1-2.el9.noarch 53 MB/s | 358 kB 00:00
2026-03-10T09:54:58.223 INFO:teuthology.orchestra.run.vm08.stdout:(105/136): python3-google-auth-2.45.0-1.el9.noa 40 MB/s | 254 kB 00:00
2026-03-10T09:54:58.224 INFO:teuthology.orchestra.run.vm01.stdout:(136/136): librados2-19.2.3-678.ge911bdeb.el9.x 1.5 MB/s | 3.4 MB 00:02
2026-03-10T09:54:58.228 INFO:teuthology.orchestra.run.vm01.stdout:--------------------------------------------------------------------------------
2026-03-10T09:54:58.228 INFO:teuthology.orchestra.run.vm01.stdout:Total 14 MB/s | 210 MB 00:15
2026-03-10T09:54:58.260 INFO:teuthology.orchestra.run.vm08.stdout:(106/136): python3-grpcio-1.46.7-10.el9.x86_64. 55 MB/s | 2.0 MB 00:00
2026-03-10T09:54:58.264 INFO:teuthology.orchestra.run.vm08.stdout:(107/136): python3-grpcio-tools-1.46.7-10.el9.x 41 MB/s | 144 kB 00:00
2026-03-10T09:54:58.266 INFO:teuthology.orchestra.run.vm08.stdout:(108/136): python3-jaraco-8.2.1-3.el9.noarch.rp 5.2 MB/s | 11 kB 00:00
2026-03-10T09:54:58.269 INFO:teuthology.orchestra.run.vm08.stdout:(109/136): python3-jaraco-classes-3.2.1-5.el9.n 8.2 MB/s | 18 kB 00:00
2026-03-10T09:54:58.271 INFO:teuthology.orchestra.run.vm08.stdout:(110/136): python3-jaraco-collections-3.0.0-8.e 10 MB/s | 23 kB 00:00
2026-03-10T09:54:58.273 INFO:teuthology.orchestra.run.vm08.stdout:(111/136): python3-jaraco-context-6.0.1-3.el9.n 9.3 MB/s | 20 kB 00:00
2026-03-10T09:54:58.276 INFO:teuthology.orchestra.run.vm08.stdout:(112/136): python3-jaraco-functools-3.5.0-2.el9 9.6 MB/s | 19 kB 00:00
2026-03-10T09:54:58.278 INFO:teuthology.orchestra.run.vm08.stdout:(113/136): python3-jaraco-text-4.0.0-2.el9.noar 11 MB/s | 26 kB 00:00
2026-03-10T09:54:58.295 INFO:teuthology.orchestra.run.vm08.stdout:(114/136): python3-kubernetes-26.1.0-3.el9.noar 63 MB/s | 1.0 MB 00:00
2026-03-10T09:54:58.297 INFO:teuthology.orchestra.run.vm08.stdout:(115/136): python3-logutils-0.3.5-21.el9.noarch 19 MB/s | 46 kB 00:00
2026-03-10T09:54:58.300 INFO:teuthology.orchestra.run.vm08.stdout:(116/136): python3-more-itertools-8.12.0-2.el9. 27 MB/s | 79 kB 00:00
2026-03-10T09:54:58.303 INFO:teuthology.orchestra.run.vm08.stdout:(117/136): python3-natsort-7.1.1-5.el9.noarch.r 24 MB/s | 58 kB 00:00
2026-03-10T09:54:58.308 INFO:teuthology.orchestra.run.vm08.stdout:(118/136): python3-pecan-1.4.2-3.el9.noarch.rpm 53 MB/s | 272 kB 00:00
2026-03-10T09:54:58.313 INFO:teuthology.orchestra.run.vm08.stdout:(119/136): python3-portend-3.1.0-2.el9.noarch.r 3.7 MB/s | 16 kB 00:00
2026-03-10T09:54:58.318 INFO:teuthology.orchestra.run.vm08.stdout:(120/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 19 MB/s | 90 kB 00:00
2026-03-10T09:54:58.322 INFO:teuthology.orchestra.run.vm08.stdout:(121/136): protobuf-compiler-3.14.0-17.el9.x86_ 2.6 MB/s | 862 kB 00:00
2026-03-10T09:54:58.323 INFO:teuthology.orchestra.run.vm08.stdout:(122/136): python3-repoze-lru-0.7-16.el9.noarch 5.7 MB/s | 31 kB 00:00
2026-03-10T09:54:58.327 INFO:teuthology.orchestra.run.vm08.stdout:(123/136): python3-rsa-4.9-2.el9.noarch.rpm 17 MB/s | 59 kB 00:00
2026-03-10T09:54:58.329 INFO:teuthology.orchestra.run.vm08.stdout:(124/136): python3-tempora-5.0.0-2.el9.noarch.r 15 MB/s | 36 kB 00:00
2026-03-10T09:54:58.332 INFO:teuthology.orchestra.run.vm08.stdout:(125/136): python3-routes-2.5.1-5.el9.noarch.rp 21 MB/s | 188 kB 00:00
2026-03-10T09:54:58.335 INFO:teuthology.orchestra.run.vm08.stdout:(126/136): python3-typing-extensions-4.15.0-1.e 17 MB/s | 86 kB 00:00
2026-03-10T09:54:58.339 INFO:teuthology.orchestra.run.vm08.stdout:(127/136): python3-webob-1.8.8-2.el9.noarch.rpm 31 MB/s | 230 kB 00:00
2026-03-10T09:54:58.341 INFO:teuthology.orchestra.run.vm08.stdout:(128/136): python3-websocket-client-1.2.3-2.el9 15 MB/s | 90 kB 00:00
2026-03-10T09:54:58.343 INFO:teuthology.orchestra.run.vm08.stdout:(129/136): python3-xmltodict-0.12.0-15.el9.noar 11 MB/s | 22 kB 00:00
2026-03-10T09:54:58.346 INFO:teuthology.orchestra.run.vm08.stdout:(130/136): python3-zc-lockfile-2.0-10.el9.noarc 9.4 MB/s | 20 kB 00:00
2026-03-10T09:54:58.351 INFO:teuthology.orchestra.run.vm08.stdout:(131/136): python3-werkzeug-2.0.3-3.el9.1.noarc 36 MB/s | 427 kB 00:00
2026-03-10T09:54:58.352 INFO:teuthology.orchestra.run.vm08.stdout:(132/136): re2-20211101-20.el9.x86_64.rpm 29 MB/s | 191 kB 00:00
2026-03-10T09:54:58.379 INFO:teuthology.orchestra.run.vm08.stdout:(133/136): thrift-0.15.0-4.el9.x86_64.rpm 58 MB/s | 1.6 MB 00:00
2026-03-10T09:54:58.596 INFO:teuthology.orchestra.run.vm08.stdout:(134/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 20 MB/s | 19 MB 00:00
2026-03-10T09:54:58.948 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T09:54:59.002 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T09:54:59.002 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T09:54:59.674 INFO:teuthology.orchestra.run.vm08.stdout:(135/136): librados2-19.2.3-678.ge911bdeb.el9.x 2.6 MB/s | 3.4 MB 00:01
2026-03-10T09:54:59.852 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T09:54:59.852 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T09:55:00.025 INFO:teuthology.orchestra.run.vm08.stdout:(136/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 1.9 MB/s | 3.2 MB 00:01
2026-03-10T09:55:00.028 INFO:teuthology.orchestra.run.vm08.stdout:--------------------------------------------------------------------------------
2026-03-10T09:55:00.028 INFO:teuthology.orchestra.run.vm08.stdout:Total 13 MB/s | 210 MB 00:15
2026-03-10T09:55:00.493 INFO:teuthology.orchestra.run.vm02.stdout:(136/136): ceph-test-19.2.3-678.ge911bdeb.el9.x 4.1 MB/s | 50 MB 00:12
2026-03-10T09:55:00.498 INFO:teuthology.orchestra.run.vm02.stdout:--------------------------------------------------------------------------------
2026-03-10T09:55:00.498 INFO:teuthology.orchestra.run.vm02.stdout:Total 13 MB/s | 210 MB 00:16
2026-03-10T09:55:00.661 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T09:55:00.712 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T09:55:00.712 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T09:55:00.792 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T09:55:00.806 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138
2026-03-10T09:55:00.819 INFO:teuthology.orchestra.run.vm01.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138
2026-03-10T09:55:01.003 INFO:teuthology.orchestra.run.vm01.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138
2026-03-10T09:55:01.008 INFO:teuthology.orchestra.run.vm01.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T09:55:01.067 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T09:55:01.069 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T09:55:01.101 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T09:55:01.112 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T09:55:01.116 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138
2026-03-10T09:55:01.119 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138
2026-03-10T09:55:01.124 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138
2026-03-10T09:55:01.131 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check
2026-03-10T09:55:01.134 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138
2026-03-10T09:55:01.136 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T09:55:01.175 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T09:55:01.176 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T09:55:01.187 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded.
2026-03-10T09:55:01.187 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T09:55:01.195 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T09:55:01.233 INFO:teuthology.orchestra.run.vm01.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138
2026-03-10T09:55:01.274 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138
2026-03-10T09:55:01.280 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138
2026-03-10T09:55:01.307 INFO:teuthology.orchestra.run.vm01.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138
2026-03-10T09:55:01.323 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138
2026-03-10T09:55:01.333 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138
2026-03-10T09:55:01.345 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138
2026-03-10T09:55:01.353 INFO:teuthology.orchestra.run.vm01.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138
2026-03-10T09:55:01.358 INFO:teuthology.orchestra.run.vm01.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138
2026-03-10T09:55:01.365 INFO:teuthology.orchestra.run.vm01.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138
2026-03-10T09:55:01.397 INFO:teuthology.orchestra.run.vm01.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138
2026-03-10T09:55:01.416 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138
2026-03-10T09:55:01.422 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138
2026-03-10T09:55:01.432 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138
2026-03-10T09:55:01.435 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138
2026-03-10T09:55:01.470 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138
2026-03-10T09:55:01.479 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138
2026-03-10T09:55:01.492 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138
2026-03-10T09:55:01.510 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138
2026-03-10T09:55:01.520 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138
2026-03-10T09:55:01.555 INFO:teuthology.orchestra.run.vm01.stdout: Installing : zip-3.0-35.el9.x86_64 33/138
2026-03-10T09:55:01.564 INFO:teuthology.orchestra.run.vm01.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138
2026-03-10T09:55:01.575 INFO:teuthology.orchestra.run.vm01.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138
2026-03-10T09:55:01.596 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T09:55:01.596 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T09:55:01.610 INFO:teuthology.orchestra.run.vm01.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138
2026-03-10T09:55:01.678 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138
2026-03-10T09:55:01.698 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138
2026-03-10T09:55:01.710 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138
2026-03-10T09:55:01.723 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138
2026-03-10T09:55:01.731 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138
2026-03-10T09:55:01.736 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138
2026-03-10T09:55:01.759 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138
2026-03-10T09:55:01.792 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138
2026-03-10T09:55:01.802 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138
2026-03-10T09:55:01.811 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138
2026-03-10T09:55:01.829 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138
2026-03-10T09:55:01.843 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138
2026-03-10T09:55:01.857 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138
2026-03-10T09:55:01.933 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138
2026-03-10T09:55:01.944 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138
2026-03-10T09:55:01.955 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 52/138
2026-03-10T09:55:02.011 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138
2026-03-10T09:55:02.157 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T09:55:02.157 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction
2026-03-10T09:55:02.452 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138
2026-03-10T09:55:02.473 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138
2026-03-10T09:55:02.481 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138
2026-03-10T09:55:02.492 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138
2026-03-10T09:55:02.499 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138
2026-03-10T09:55:02.509 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138
2026-03-10T09:55:02.515 INFO:teuthology.orchestra.run.vm01.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138
2026-03-10T09:55:02.517 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138
2026-03-10T09:55:02.555 INFO:teuthology.orchestra.run.vm01.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138
2026-03-10T09:55:02.613 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T09:55:02.619 INFO:teuthology.orchestra.run.vm01.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138
2026-03-10T09:55:02.638 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138
2026-03-10T09:55:02.642 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138
2026-03-10T09:55:02.648 INFO:teuthology.orchestra.run.vm01.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138
2026-03-10T09:55:02.657 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138
2026-03-10T09:55:02.658 INFO:teuthology.orchestra.run.vm08.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138
2026-03-10T09:55:02.669 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138
2026-03-10T09:55:02.677 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138
2026-03-10T09:55:02.688 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138
2026-03-10T09:55:02.695 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138
2026-03-10T09:55:02.737 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138
2026-03-10T09:55:02.755 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138
2026-03-10T09:55:02.806 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138
2026-03-10T09:55:02.849 INFO:teuthology.orchestra.run.vm08.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138
2026-03-10T09:55:02.853 INFO:teuthology.orchestra.run.vm08.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T09:55:02.918 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T09:55:02.957 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T09:55:02.996 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T09:55:03.007 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-10T09:55:03.012 INFO:teuthology.orchestra.run.vm08.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138 2026-03-10T09:55:03.015 INFO:teuthology.orchestra.run.vm08.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138 2026-03-10T09:55:03.022 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138 2026-03-10T09:55:03.034 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138 2026-03-10T09:55:03.035 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T09:55:03.075 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T09:55:03.077 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138 2026-03-10T09:55:03.096 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138 2026-03-10T09:55:03.114 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138 2026-03-10T09:55:03.136 INFO:teuthology.orchestra.run.vm08.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138 2026-03-10T09:55:03.136 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T09:55:03.149 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138 2026-03-10T09:55:03.164 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138 2026-03-10T09:55:03.178 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138 2026-03-10T09:55:03.194 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138 2026-03-10T09:55:03.197 
INFO:teuthology.orchestra.run.vm02.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138 2026-03-10T09:55:03.201 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138 2026-03-10T09:55:03.232 INFO:teuthology.orchestra.run.vm08.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138 2026-03-10T09:55:03.237 INFO:teuthology.orchestra.run.vm01.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138 2026-03-10T09:55:03.247 INFO:teuthology.orchestra.run.vm01.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138 2026-03-10T09:55:03.261 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138 2026-03-10T09:55:03.273 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138 2026-03-10T09:55:03.275 INFO:teuthology.orchestra.run.vm01.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138 2026-03-10T09:55:03.286 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138 2026-03-10T09:55:03.295 INFO:teuthology.orchestra.run.vm08.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138 2026-03-10T09:55:03.301 INFO:teuthology.orchestra.run.vm08.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138 2026-03-10T09:55:03.308 INFO:teuthology.orchestra.run.vm08.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138 2026-03-10T09:55:03.390 INFO:teuthology.orchestra.run.vm02.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138 2026-03-10T09:55:03.394 INFO:teuthology.orchestra.run.vm02.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-10T09:55:03.416 INFO:teuthology.orchestra.run.vm08.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138 2026-03-10T09:55:03.439 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138 2026-03-10T09:55:03.446 INFO:teuthology.orchestra.run.vm08.stdout: Installing : 
python3-requests-2.25.1-10.el9.noarch 25/138 2026-03-10T09:55:03.456 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138 2026-03-10T09:55:03.456 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-10T09:55:03.458 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138 2026-03-10T09:55:03.460 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138 2026-03-10T09:55:03.494 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138 2026-03-10T09:55:03.497 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138 2026-03-10T09:55:03.505 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138 2026-03-10T09:55:03.510 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-10T09:55:03.515 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138 2026-03-10T09:55:03.518 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138 2026-03-10T09:55:03.520 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138 2026-03-10T09:55:03.523 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138 2026-03-10T09:55:03.534 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138 2026-03-10T09:55:03.535 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T09:55:03.537 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138 2026-03-10T09:55:03.547 
INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138 2026-03-10T09:55:03.572 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T09:55:03.574 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138 2026-03-10T09:55:03.580 INFO:teuthology.orchestra.run.vm08.stdout: Installing : zip-3.0-35.el9.x86_64 33/138 2026-03-10T09:55:03.587 INFO:teuthology.orchestra.run.vm08.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138 2026-03-10T09:55:03.588 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138 2026-03-10T09:55:03.598 INFO:teuthology.orchestra.run.vm08.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138 2026-03-10T09:55:03.622 INFO:teuthology.orchestra.run.vm02.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138 2026-03-10T09:55:03.631 INFO:teuthology.orchestra.run.vm08.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138 2026-03-10T09:55:03.664 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138 2026-03-10T09:55:03.669 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138 2026-03-10T09:55:03.698 INFO:teuthology.orchestra.run.vm02.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138 2026-03-10T09:55:03.707 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138 2026-03-10T09:55:03.713 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138 2026-03-10T09:55:03.721 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138 2026-03-10T09:55:03.723 INFO:teuthology.orchestra.run.vm01.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138 2026-03-10T09:55:03.727 
INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138 2026-03-10T09:55:03.732 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138 2026-03-10T09:55:03.736 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138 2026-03-10T09:55:03.742 INFO:teuthology.orchestra.run.vm02.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138 2026-03-10T09:55:03.745 INFO:teuthology.orchestra.run.vm02.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138 2026-03-10T09:55:03.747 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138 2026-03-10T09:55:03.751 INFO:teuthology.orchestra.run.vm02.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138 2026-03-10T09:55:03.754 INFO:teuthology.orchestra.run.vm08.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138 2026-03-10T09:55:03.759 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138 2026-03-10T09:55:03.780 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138 2026-03-10T09:55:03.784 INFO:teuthology.orchestra.run.vm02.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138 2026-03-10T09:55:03.801 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138 2026-03-10T09:55:03.806 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138 2026-03-10T09:55:03.811 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138 2026-03-10T09:55:03.813 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138 2026-03-10T09:55:03.816 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138 2026-03-10T09:55:03.819 
INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138 2026-03-10T09:55:03.826 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138 2026-03-10T09:55:03.827 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138 2026-03-10T09:55:03.843 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138 2026-03-10T09:55:03.848 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138 2026-03-10T09:55:03.856 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138 2026-03-10T09:55:03.858 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138 2026-03-10T09:55:03.868 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138 2026-03-10T09:55:03.874 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138 2026-03-10T09:55:03.882 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138 2026-03-10T09:55:03.891 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138 2026-03-10T09:55:03.923 INFO:teuthology.orchestra.run.vm02.stdout: Installing : zip-3.0-35.el9.x86_64 33/138 2026-03-10T09:55:03.929 INFO:teuthology.orchestra.run.vm02.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138 2026-03-10T09:55:03.938 INFO:teuthology.orchestra.run.vm02.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138 2026-03-10T09:55:03.950 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138 2026-03-10T09:55:03.961 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138 2026-03-10T09:55:03.973 
INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 52/138 2026-03-10T09:55:03.975 INFO:teuthology.orchestra.run.vm02.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138 2026-03-10T09:55:04.037 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138 2026-03-10T09:55:04.060 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138 2026-03-10T09:55:04.079 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138 2026-03-10T09:55:04.087 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138 2026-03-10T09:55:04.097 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138 2026-03-10T09:55:04.105 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138 2026-03-10T09:55:04.110 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138 2026-03-10T09:55:04.130 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138 2026-03-10T09:55:04.160 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138 2026-03-10T09:55:04.167 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138 2026-03-10T09:55:04.173 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138 2026-03-10T09:55:04.193 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138 2026-03-10T09:55:04.209 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138 2026-03-10T09:55:04.224 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138 
2026-03-10T09:55:04.299 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138 2026-03-10T09:55:04.314 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138 2026-03-10T09:55:04.334 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 52/138 2026-03-10T09:55:04.390 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138 2026-03-10T09:55:04.466 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138 2026-03-10T09:55:04.485 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138 2026-03-10T09:55:04.492 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138 2026-03-10T09:55:04.501 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138 2026-03-10T09:55:04.508 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138 2026-03-10T09:55:04.517 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138 2026-03-10T09:55:04.522 INFO:teuthology.orchestra.run.vm08.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138 2026-03-10T09:55:04.524 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138 2026-03-10T09:55:04.559 INFO:teuthology.orchestra.run.vm08.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138 2026-03-10T09:55:04.617 INFO:teuthology.orchestra.run.vm08.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138 2026-03-10T09:55:04.633 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138 2026-03-10T09:55:04.644 INFO:teuthology.orchestra.run.vm08.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138 2026-03-10T09:55:04.651 
INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138 2026-03-10T09:55:04.663 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138 2026-03-10T09:55:04.670 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138 2026-03-10T09:55:04.681 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138 2026-03-10T09:55:04.689 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138 2026-03-10T09:55:04.742 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138 2026-03-10T09:55:04.762 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138 2026-03-10T09:55:04.771 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138 2026-03-10T09:55:04.803 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138 2026-03-10T09:55:04.810 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138 2026-03-10T09:55:04.815 INFO:teuthology.orchestra.run.vm01.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138 2026-03-10T09:55:04.818 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138 2026-03-10T09:55:04.851 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138 2026-03-10T09:55:04.870 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138 2026-03-10T09:55:04.879 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138 2026-03-10T09:55:04.889 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138 
2026-03-10T09:55:04.895 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138 2026-03-10T09:55:04.904 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138 2026-03-10T09:55:04.910 INFO:teuthology.orchestra.run.vm02.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138 2026-03-10T09:55:04.913 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138 2026-03-10T09:55:04.949 INFO:teuthology.orchestra.run.vm02.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138 2026-03-10T09:55:04.979 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138 2026-03-10T09:55:04.983 INFO:teuthology.orchestra.run.vm01.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T09:55:05.008 INFO:teuthology.orchestra.run.vm02.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138 2026-03-10T09:55:05.017 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T09:55:05.022 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138 2026-03-10T09:55:05.024 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138 2026-03-10T09:55:05.031 INFO:teuthology.orchestra.run.vm01.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138 2026-03-10T09:55:05.033 INFO:teuthology.orchestra.run.vm02.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138 2026-03-10T09:55:05.040 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138 2026-03-10T09:55:05.047 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138 2026-03-10T09:55:05.053 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138 
2026-03-10T09:55:05.063 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138 2026-03-10T09:55:05.069 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138 2026-03-10T09:55:05.106 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138 2026-03-10T09:55:05.120 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138 2026-03-10T09:55:05.139 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138 2026-03-10T09:55:05.175 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138 2026-03-10T09:55:05.176 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138 2026-03-10T09:55:05.187 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138 2026-03-10T09:55:05.271 INFO:teuthology.orchestra.run.vm08.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138 2026-03-10T09:55:05.275 INFO:teuthology.orchestra.run.vm08.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138 2026-03-10T09:55:05.301 INFO:teuthology.orchestra.run.vm01.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138 2026-03-10T09:55:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138 2026-03-10T09:55:05.304 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T09:55:05.328 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T09:55:05.331 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138 2026-03-10T09:55:05.487 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 
74/138 2026-03-10T09:55:05.521 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138 2026-03-10T09:55:05.529 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138 2026-03-10T09:55:05.595 INFO:teuthology.orchestra.run.vm02.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138 2026-03-10T09:55:05.598 INFO:teuthology.orchestra.run.vm02.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138 2026-03-10T09:55:05.626 INFO:teuthology.orchestra.run.vm02.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138 2026-03-10T09:55:05.724 INFO:teuthology.orchestra.run.vm08.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138 2026-03-10T09:55:05.825 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138 2026-03-10T09:55:06.040 INFO:teuthology.orchestra.run.vm02.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138 2026-03-10T09:55:06.137 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138 2026-03-10T09:55:06.484 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T09:55:06.556 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T09:55:06.581 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T09:55:06.600 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138 2026-03-10T09:55:06.622 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138 2026-03-10T09:55:06.670 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138 2026-03-10T09:55:06.703 INFO:teuthology.orchestra.run.vm08.stdout: Installing : 
python3-scipy-1.9.3-2.el9.x86_64 83/138 2026-03-10T09:55:06.710 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138 2026-03-10T09:55:06.716 INFO:teuthology.orchestra.run.vm08.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138 2026-03-10T09:55:06.719 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138 2026-03-10T09:55:06.737 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138 2026-03-10T09:55:06.775 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138 2026-03-10T09:55:06.817 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138 2026-03-10T09:55:06.881 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138 2026-03-10T09:55:06.882 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138 2026-03-10T09:55:06.884 INFO:teuthology.orchestra.run.vm08.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T09:55:06.897 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138 2026-03-10T09:55:06.907 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-10T09:55:06.914 INFO:teuthology.orchestra.run.vm01.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138 2026-03-10T09:55:06.918 INFO:teuthology.orchestra.run.vm01.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138 2026-03-10T09:55:06.919 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T09:55:06.921 INFO:teuthology.orchestra.run.vm01.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T09:55:06.924 INFO:teuthology.orchestra.run.vm08.stdout: Installing : 
python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138 2026-03-10T09:55:06.932 INFO:teuthology.orchestra.run.vm08.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138 2026-03-10T09:55:06.946 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T09:55:06.977 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138 2026-03-10T09:55:07.008 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138 2026-03-10T09:55:07.015 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138 2026-03-10T09:55:07.021 INFO:teuthology.orchestra.run.vm02.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138 2026-03-10T09:55:07.187 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138 2026-03-10T09:55:07.191 INFO:teuthology.orchestra.run.vm02.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T09:55:07.197 INFO:teuthology.orchestra.run.vm08.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138 2026-03-10T09:55:07.201 INFO:teuthology.orchestra.run.vm08.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T09:55:07.227 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T09:55:07.227 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T09:55:07.230 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138 2026-03-10T09:55:07.232 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138 2026-03-10T09:55:07.242 INFO:teuthology.orchestra.run.vm02.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138 2026-03-10T09:55:07.273 
INFO:teuthology.orchestra.run.vm01.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138 2026-03-10T09:55:07.280 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T09:55:07.323 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T09:55:07.323 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target. 2026-03-10T09:55:07.323 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 2026-03-10T09:55:07.323 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:55:07.328 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-10T09:55:07.506 INFO:teuthology.orchestra.run.vm02.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138 2026-03-10T09:55:07.509 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T09:55:07.530 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T09:55:07.533 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138 2026-03-10T09:55:08.369 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T09:55:08.447 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T09:55:08.474 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T09:55:08.493 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138 2026-03-10T09:55:08.515 
INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138 2026-03-10T09:55:08.609 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138 2026-03-10T09:55:08.626 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138 2026-03-10T09:55:08.656 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138 2026-03-10T09:55:08.685 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T09:55:08.696 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138 2026-03-10T09:55:08.755 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T09:55:08.782 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T09:55:08.801 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138 2026-03-10T09:55:08.816 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138 2026-03-10T09:55:08.824 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138 2026-03-10T09:55:08.829 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138 2026-03-10T09:55:08.836 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-10T09:55:08.844 INFO:teuthology.orchestra.run.vm08.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138 2026-03-10T09:55:08.851 INFO:teuthology.orchestra.run.vm08.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138 2026-03-10T09:55:08.853 INFO:teuthology.orchestra.run.vm08.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T09:55:08.874 
INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T09:55:08.921 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138 2026-03-10T09:55:08.941 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138 2026-03-10T09:55:08.976 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138 2026-03-10T09:55:09.016 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138 2026-03-10T09:55:09.083 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138 2026-03-10T09:55:09.094 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138 2026-03-10T09:55:09.101 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-10T09:55:09.108 INFO:teuthology.orchestra.run.vm02.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138 2026-03-10T09:55:09.115 INFO:teuthology.orchestra.run.vm02.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138 2026-03-10T09:55:09.118 INFO:teuthology.orchestra.run.vm02.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T09:55:09.141 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T09:55:09.199 INFO:teuthology.orchestra.run.vm08.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138 2026-03-10T09:55:09.207 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T09:55:09.258 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T09:55:09.258 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → 
/usr/lib/systemd/system/ceph.target. 2026-03-10T09:55:09.258 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 2026-03-10T09:55:09.258 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:55:09.266 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-10T09:55:09.475 INFO:teuthology.orchestra.run.vm02.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138 2026-03-10T09:55:09.482 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T09:55:09.528 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T09:55:09.528 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target. 2026-03-10T09:55:09.528 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 
2026-03-10T09:55:09.528 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:55:09.532 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-10T09:55:14.291 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-10T09:55:14.291 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /sys 2026-03-10T09:55:14.291 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /proc 2026-03-10T09:55:14.291 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /mnt 2026-03-10T09:55:14.291 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /var/tmp 2026-03-10T09:55:14.291 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /home 2026-03-10T09:55:14.291 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /root 2026-03-10T09:55:14.291 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /tmp 2026-03-10T09:55:14.291 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:55:14.425 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-10T09:55:14.449 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-10T09:55:14.449 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:14.449 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-10T09:55:14.449 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-10T09:55:14.449 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 
2026-03-10T09:55:14.449 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:55:14.698 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-10T09:55:14.723 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-10T09:55:14.723 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:14.723 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-10T09:55:14.723 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-10T09:55:14.723 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-10T09:55:14.723 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:55:14.822 INFO:teuthology.orchestra.run.vm01.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138 2026-03-10T09:55:14.907 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138 2026-03-10T09:55:15.103 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T09:55:15.113 INFO:teuthology.orchestra.run.vm01.stdout:Creating group 'qat' with GID 994. 2026-03-10T09:55:15.113 INFO:teuthology.orchestra.run.vm01.stdout:Creating group 'libstoragemgmt' with GID 993. 2026-03-10T09:55:15.113 INFO:teuthology.orchestra.run.vm01.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993. 
2026-03-10T09:55:15.113 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:55:15.495 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T09:55:15.649 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T09:55:15.649 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 2026-03-10T09:55:15.649 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:55:15.839 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138 2026-03-10T09:55:16.321 INFO:teuthology.orchestra.run.vm01.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138 2026-03-10T09:55:16.437 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-10T09:55:16.455 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-10T09:55:16.455 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:16.455 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-03-10T09:55:16.455 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:55:16.964 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-10T09:55:16.964 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /sys 2026-03-10T09:55:16.964 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /proc 2026-03-10T09:55:16.964 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /mnt 2026-03-10T09:55:16.964 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /var/tmp 2026-03-10T09:55:16.964 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /home 2026-03-10T09:55:16.964 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /root 2026-03-10T09:55:16.964 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /tmp 2026-03-10T09:55:16.964 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:55:17.006 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-10T09:55:17.006 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /sys 2026-03-10T09:55:17.006 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /proc 2026-03-10T09:55:17.006 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /mnt 2026-03-10T09:55:17.007 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /var/tmp 2026-03-10T09:55:17.007 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /home 2026-03-10T09:55:17.007 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /root 2026-03-10T09:55:17.007 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /tmp 2026-03-10T09:55:17.007 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:55:17.096 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-10T09:55:17.122 
INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-10T09:55:17.123 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:17.123 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-10T09:55:17.123 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-10T09:55:17.123 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-10T09:55:17.123 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:55:17.137 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-10T09:55:17.163 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-10T09:55:17.163 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:17.163 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-10T09:55:17.163 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-10T09:55:17.163 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 
2026-03-10T09:55:17.163 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:55:17.281 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138 2026-03-10T09:55:17.310 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138 2026-03-10T09:55:17.310 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:17.310 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-10T09:55:17.310 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-10T09:55:17.310 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-10T09:55:17.310 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:55:17.366 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-10T09:55:17.379 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138 2026-03-10T09:55:17.382 INFO:teuthology.orchestra.run.vm01.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138 2026-03-10T09:55:17.388 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138 2026-03-10T09:55:17.389 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-10T09:55:17.389 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:17.389 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 
2026-03-10T09:55:17.389 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-10T09:55:17.389 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-10T09:55:17.389 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:55:17.412 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138 2026-03-10T09:55:17.442 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138 2026-03-10T09:55:17.444 INFO:teuthology.orchestra.run.vm08.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138 2026-03-10T09:55:17.465 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-10T09:55:17.489 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-10T09:55:17.489 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:17.489 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-10T09:55:17.489 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-10T09:55:17.489 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 
2026-03-10T09:55:17.489 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:55:17.500 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138 2026-03-10T09:55:17.551 INFO:teuthology.orchestra.run.vm02.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138 2026-03-10T09:55:17.593 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138 2026-03-10T09:55:17.596 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T09:55:17.596 INFO:teuthology.orchestra.run.vm08.stdout:Creating group 'qat' with GID 994. 2026-03-10T09:55:17.596 INFO:teuthology.orchestra.run.vm08.stdout:Creating group 'libstoragemgmt' with GID 993. 2026-03-10T09:55:17.596 INFO:teuthology.orchestra.run.vm08.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993. 2026-03-10T09:55:17.596 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:55:17.609 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T09:55:17.619 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T09:55:17.619 INFO:teuthology.orchestra.run.vm02.stdout:Creating group 'qat' with GID 994. 2026-03-10T09:55:17.619 INFO:teuthology.orchestra.run.vm02.stdout:Creating group 'libstoragemgmt' with GID 993. 2026-03-10T09:55:17.619 INFO:teuthology.orchestra.run.vm02.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993. 
2026-03-10T09:55:17.619 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:55:17.634 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T09:55:17.645 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T09:55:17.646 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 2026-03-10T09:55:17.646 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:55:17.701 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T09:55:17.701 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 2026-03-10T09:55:17.701 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:55:17.742 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138 2026-03-10T09:55:17.788 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138 2026-03-10T09:55:17.842 INFO:teuthology.orchestra.run.vm08.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138 2026-03-10T09:55:17.848 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-10T09:55:17.864 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-10T09:55:17.864 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:17.864 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-03-10T09:55:17.864 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:55:17.915 INFO:teuthology.orchestra.run.vm02.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138 2026-03-10T09:55:18.010 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138 2026-03-10T09:55:18.071 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-10T09:55:18.088 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-10T09:55:18.088 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:18.088 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-03-10T09:55:18.088 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:55:18.125 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138 2026-03-10T09:55:18.661 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138 2026-03-10T09:55:18.664 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138 2026-03-10T09:55:18.676 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138 2026-03-10T09:55:18.703 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138 2026-03-10T09:55:18.703 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:18.703 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 
2026-03-10T09:55:18.703 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-10T09:55:18.703 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-10T09:55:18.703 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:55:18.733 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138 2026-03-10T09:55:18.858 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138 2026-03-10T09:55:18.859 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138 2026-03-10T09:55:18.888 INFO:teuthology.orchestra.run.vm08.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138 2026-03-10T09:55:18.894 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138 2026-03-10T09:55:18.918 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138 2026-03-10T09:55:18.918 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:18.918 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T09:55:18.918 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-10T09:55:18.918 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 
2026-03-10T09:55:18.918 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:55:18.993 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-10T09:55:18.993 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138 2026-03-10T09:55:18.995 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138 2026-03-10T09:55:19.007 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-10T09:55:19.019 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138 2026-03-10T09:55:19.022 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138 2026-03-10T09:55:19.022 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:19.022 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-10T09:55:19.022 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-10T09:55:19.022 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 
2026-03-10T09:55:19.022 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:55:19.044 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138 2026-03-10T09:55:19.107 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138 2026-03-10T09:55:19.110 INFO:teuthology.orchestra.run.vm02.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138 2026-03-10T09:55:19.118 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138 2026-03-10T09:55:19.142 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138 2026-03-10T09:55:19.160 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138 2026-03-10T09:55:19.572 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138 2026-03-10T09:55:19.577 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138 2026-03-10T09:55:19.603 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138 2026-03-10T09:55:19.603 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:19.603 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-10T09:55:19.603 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-10T09:55:19.603 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 
2026-03-10T09:55:19.603 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:55:19.616 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138 2026-03-10T09:55:19.624 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138 2026-03-10T09:55:19.630 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138 2026-03-10T09:55:19.643 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138 2026-03-10T09:55:19.643 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:19.643 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-10T09:55:19.643 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:55:19.725 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138 2026-03-10T09:55:19.732 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138 2026-03-10T09:55:19.803 INFO:teuthology.orchestra.run.vm01.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138 2026-03-10T09:55:19.829 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138 2026-03-10T09:55:19.829 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:19.829 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 
2026-03-10T09:55:19.829 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-10T09:55:19.829 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-10T09:55:19.829 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:55:20.172 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138 2026-03-10T09:55:20.174 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138 2026-03-10T09:55:20.236 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138 2026-03-10T09:55:20.284 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138 2026-03-10T09:55:20.288 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138 2026-03-10T09:55:20.292 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138 2026-03-10T09:55:20.295 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138 2026-03-10T09:55:20.324 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138 2026-03-10T09:55:20.324 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:20.324 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 
2026-03-10T09:55:20.324 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-10T09:55:20.324 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-10T09:55:20.324 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:55:20.342 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-10T09:55:20.352 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138 2026-03-10T09:55:20.357 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-10T09:55:20.413 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138 2026-03-10T09:55:20.415 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138 2026-03-10T09:55:20.442 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138 2026-03-10T09:55:20.442 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:55:20.442 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T09:55:20.442 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-10T09:55:20.442 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 
2026-03-10T09:55:20.442 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:55:20.460 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T09:55:20.469 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T09:55:20.883 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138
2026-03-10T09:55:20.886 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T09:55:20.913 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T09:55:20.913 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:55:20.913 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T09:55:20.913 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T09:55:20.913 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T09:55:20.913 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:55:20.924 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T09:55:20.949 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T09:55:20.949 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:55:20.949 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T09:55:20.949 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:55:20.990 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138
2026-03-10T09:55:20.994 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T09:55:21.018 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T09:55:21.018 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:55:21.018 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T09:55:21.018 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T09:55:21.018 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T09:55:21.018 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:55:21.057 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T09:55:21.080 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T09:55:21.080 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:55:21.080 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T09:55:21.080 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:55:21.106 INFO:teuthology.orchestra.run.vm08.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T09:55:21.133 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T09:55:21.133 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:55:21.133 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T09:55:21.133 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T09:55:21.133 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T09:55:21.133 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:55:21.256 INFO:teuthology.orchestra.run.vm02.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T09:55:21.283 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T09:55:21.283 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:55:21.283 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T09:55:21.283 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T09:55:21.283 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-10T09:55:21.283 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:55:22.456 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138 2026-03-10T09:55:22.468 INFO:teuthology.orchestra.run.vm01.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138 2026-03-10T09:55:22.475 INFO:teuthology.orchestra.run.vm01.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138 2026-03-10T09:55:22.533 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138 2026-03-10T09:55:22.543 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T09:55:22.547 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138 2026-03-10T09:55:22.547 INFO:teuthology.orchestra.run.vm01.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T09:55:22.567 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T09:55:22.568 INFO:teuthology.orchestra.run.vm01.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T09:55:23.814 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138 2026-03-10T09:55:23.909 INFO:teuthology.orchestra.run.vm08.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138 2026-03-10T09:55:23.968 INFO:teuthology.orchestra.run.vm08.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138 2026-03-10T09:55:23.981 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138 2026-03-10T09:55:23.993 INFO:teuthology.orchestra.run.vm02.stdout: 
Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138 2026-03-10T09:55:23.998 INFO:teuthology.orchestra.run.vm02.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : 
libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138 2026-03-10T09:55:24.005 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : 
ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138 2026-03-10T09:55:24.006 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138 2026-03-10T09:55:24.009 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : 
libstoragemgmt-1.10.1-1.el9.x86_64 60/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 69/138 2026-03-10T09:55:24.009 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138 2026-03-10T09:55:24.010 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 85/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : 
libarrow-9.0.0-15.el9.x86_64 93/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138 
2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 
125/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138 2026-03-10T09:55:24.010 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138 2026-03-10T09:55:24.011 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138 2026-03-10T09:55:24.011 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138 2026-03-10T09:55:24.011 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138 2026-03-10T09:55:24.011 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138 2026-03-10T09:55:24.011 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138 2026-03-10T09:55:24.011 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138 2026-03-10T09:55:24.011 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T09:55:24.011 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138 2026-03-10T09:55:24.011 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138 2026-03-10T09:55:24.028 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138 2026-03-10T09:55:24.038 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T09:55:24.044 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138 2026-03-10T09:55:24.044 INFO:teuthology.orchestra.run.vm08.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 
137/138 2026-03-10T09:55:24.058 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138 2026-03-10T09:55:24.065 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T09:55:24.065 INFO:teuthology.orchestra.run.vm08.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T09:55:24.068 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T09:55:24.073 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138 2026-03-10T09:55:24.073 INFO:teuthology.orchestra.run.vm02.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T09:55:24.089 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T09:55:24.089 INFO:teuthology.orchestra.run.vm02.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout:Upgraded: 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout:Installed: 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 
2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.122 INFO:teuthology.orchestra.run.vm01.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 
2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-10T09:55:24.123 
INFO:teuthology.orchestra.run.vm01.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: lua-5.4.4-4.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 
2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout:  python3-asyncssh-2.13.2-5.el9.noarch
2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout:  python3-autocommand-2.2.2-8.el9.noarch
2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout:  python3-babel-2.9.1-2.el9.noarch
2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout:  python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout:  python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout:  python3-cachetools-4.2.4-1.el9.noarch
2026-03-10T09:55:24.123 INFO:teuthology.orchestra.run.vm01.stdout:  python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-certifi-2023.05.07-4.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-cffi-1.14.5-5.el9.x86_64
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-cheroot-10.0.1-4.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-cherrypy-18.6.1-2.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-cryptography-36.0.1-5.el9.x86_64
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-devel-3.9.25-3.el9.x86_64
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-grpcio-1.46.7-10.el9.x86_64
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-jaraco-8.2.1-3.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-jinja2-2.11.3-8.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-jmespath-1.0.1-1.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-logutils-0.3.5-21.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-mako-1.1.4-6.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-more-itertools-8.12.0-2.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-natsort-7.1.1-5.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-packaging-20.9-5.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-pecan-1.4.2-3.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-ply-3.11-14.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-portend-3.1.0-2.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-protobuf-3.14.0-17.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-pyasn1-0.4.8-7.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-pycparser-2.20-6.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-repoze-lru-0.7-16.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-requests-2.25.1-10.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-10T09:55:24.124 INFO:teuthology.orchestra.run.vm01.stdout:  python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  python3-routes-2.5.1-5.el9.noarch
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  python3-rsa-4.9-2.el9.noarch
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  python3-scipy-1.9.3-2.el9.x86_64
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  python3-tempora-5.0.0-2.el9.noarch
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  python3-toml-0.10.2-6.el9.noarch
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  python3-urllib3-1.26.5-7.el9.noarch
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  python3-webob-1.8.8-2.el9.noarch
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  python3-websocket-client-1.2.3-2.el9.noarch
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  python3-xmltodict-0.12.0-15.el9.noarch
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  python3-zc-lockfile-2.0-10.el9.noarch
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  qatlib-25.08.0-2.el9.x86_64
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  qatlib-service-25.08.0-2.el9.x86_64
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  qatzip-libs-1.3.1-1.el9.x86_64
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  re2-1:20211101-20.el9.x86_64
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  socat-1.7.4.1-8.el9.x86_64
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  thrift-0.15.0-4.el9.x86_64
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  unzip-6.0-59.el9.x86_64
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:  zip-3.0-35.el9.x86_64
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:55:24.125 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:55:24.228 DEBUG:teuthology.parallel:result is None
2026-03-10T09:55:26.206 INFO:teuthology.orchestra.run.vm02.stdout:  Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T09:55:26.206 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138
2026-03-10T09:55:26.206 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138
2026-03-10T09:55:26.206 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138
2026-03-10T09:55:26.206 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138
2026-03-10T09:55:26.207 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : libconfig-1.7.2-9.el9.x86_64 39/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : mailcap-2.1.49-5.el9.noarch 42/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : pciutils-3.7.0-7.el9.x86_64 43/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-ply-3.11-14.el9.noarch 46/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-pycparser-2.20-6.el9.noarch 47/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-requests-2.25.1-10.el9.noarch 48/138
2026-03-10T09:55:26.208 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T09:55:26.209 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138
2026-03-10T09:55:26.209 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138
2026-03-10T09:55:26.209 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138
2026-03-10T09:55:26.209 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138
2026-03-10T09:55:26.209 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : unzip-6.0-59.el9.x86_64 50/138
2026-03-10T09:55:26.209 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : zip-3.0-35.el9.x86_64 51/138
2026-03-10T09:55:26.209 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138
2026-03-10T09:55:26.209 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138
2026-03-10T09:55:26.209 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138
2026-03-10T09:55:26.209 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138
2026-03-10T09:55:26.209 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : libnbd-1.20.3-4.el9.x86_64 56/138
2026-03-10T09:55:26.209 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138
2026-03-10T09:55:26.209 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : libxslt-1.1.34-12.el9.x86_64 61/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : lua-5.4.4-4.el9.x86_64 63/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : openblas-0.3.29-1.el9.x86_64 64/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : protobuf-3.14.0-17.el9.x86_64 66/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-babel-2.9.1-2.el9.noarch 67/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-jinja2-2.11.3-8.el9.noarch 69/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-mako-1.1.4-6.el9.noarch 72/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-packaging-20.9-5.el9.noarch 76/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-toml-0.10.2-6.el9.noarch 82/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : qatlib-25.08.0-2.el9.x86_64 83/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : qatzip-libs-1.3.1-1.el9.x86_64 85/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : socat-1.7.4.1-8.el9.x86_64 86/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138
2026-03-10T09:55:26.210 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138
2026-03-10T09:55:26.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : grpc-data-1.46.7-10.el9.noarch 92/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : libarrow-9.0.0-15.el9.x86_64 93/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : liboath-2.6.12-1.el9.x86_64 95/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : libunwind-1.6.2-1.el9.x86_64 96/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : luarocks-3.9.2-5.el9.noarch 97/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138
2026-03-10T09:55:26.212 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-portend-3.1.0-2.el9.noarch 121/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-routes-2.5.1-5.el9.noarch 124/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-rsa-4.9-2.el9.noarch 125/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-webob-1.8.8-2.el9.noarch 128/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : re2-1:20211101-20.el9.x86_64 133/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : thrift-0.15.0-4.el9.x86_64 134/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138
2026-03-10T09:55:26.213 INFO:teuthology.orchestra.run.vm02.stdout:  Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138
2026-03-10T09:55:26.215 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138
2026-03-10T09:55:26.215 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138
2026-03-10T09:55:26.215 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138
2026-03-10T09:55:26.215 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138
2026-03-10T09:55:26.215 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : libconfig-1.7.2-9.el9.x86_64 39/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : mailcap-2.1.49-5.el9.noarch 42/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : pciutils-3.7.0-7.el9.x86_64 43/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : python3-ply-3.11-14.el9.noarch 46/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : python3-pycparser-2.20-6.el9.noarch 47/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : python3-requests-2.25.1-10.el9.noarch 48/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : unzip-6.0-59.el9.x86_64 50/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying : zip-3.0-35.el9.x86_64 51/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 69/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 85/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138
2026-03-10T09:55:26.216 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 93/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138
2026-03-10T09:55:26.217 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout:Upgraded:
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout:Installed:
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T09:55:26.323 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-jmespath-1.0.1-1.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply-3.11-14.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-10T09:55:26.324 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-xmltodict-0.12.0-15.el9.noarch
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: socat-1.7.4.1-8.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout: zip-3.0-35.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm02.stdout:Upgraded:
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm02.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm02.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm02.stdout:Installed:
2026-03-10T09:55:26.325 INFO:teuthology.orchestra.run.vm02.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-10T09:55:26.326 INFO:teuthology.orchestra.run.vm02.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-jmespath-1.0.1-1.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-ply-3.11-14.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-xmltodict-0.12.0-15.el9.noarch
2026-03-10T09:55:26.327 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-10T09:55:26.328 INFO:teuthology.orchestra.run.vm02.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-10T09:55:26.328 INFO:teuthology.orchestra.run.vm02.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-10T09:55:26.328 INFO:teuthology.orchestra.run.vm02.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-10T09:55:26.328 INFO:teuthology.orchestra.run.vm02.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.328 INFO:teuthology.orchestra.run.vm02.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.328 INFO:teuthology.orchestra.run.vm02.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:55:26.328 INFO:teuthology.orchestra.run.vm02.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T09:55:26.328 INFO:teuthology.orchestra.run.vm02.stdout: socat-1.7.4.1-8.el9.x86_64
2026-03-10T09:55:26.328 INFO:teuthology.orchestra.run.vm02.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T09:55:26.328 INFO:teuthology.orchestra.run.vm02.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T09:55:26.328 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T09:55:26.328 INFO:teuthology.orchestra.run.vm02.stdout: zip-3.0-35.el9.x86_64
2026-03-10T09:55:26.328 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:55:26.328 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:55:26.429 DEBUG:teuthology.parallel:result is None
2026-03-10T09:55:26.447 DEBUG:teuthology.parallel:result is None
2026-03-10T09:55:26.447 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:55:27.132 DEBUG:teuthology.orchestra.run.vm01:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-10T09:55:27.154 INFO:teuthology.orchestra.run.vm01.stdout:19.2.3-678.ge911bdeb.el9
2026-03-10T09:55:27.154 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9
2026-03-10T09:55:27.154 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed.
2026-03-10T09:55:27.155 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:55:27.755 DEBUG:teuthology.orchestra.run.vm02:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-10T09:55:27.776 INFO:teuthology.orchestra.run.vm02.stdout:19.2.3-678.ge911bdeb.el9
2026-03-10T09:55:27.777 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9
2026-03-10T09:55:27.777 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed.
2026-03-10T09:55:27.778 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:55:28.359 DEBUG:teuthology.orchestra.run.vm08:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-10T09:55:28.380 INFO:teuthology.orchestra.run.vm08.stdout:19.2.3-678.ge911bdeb.el9
2026-03-10T09:55:28.380 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9
2026-03-10T09:55:28.380 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed.
2026-03-10T09:55:28.381 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-10T09:55:28.381 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T09:55:28.381 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-10T09:55:28.412 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T09:55:28.412 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-10T09:55:28.442 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T09:55:28.442 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-10T09:55:28.471 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-10T09:55:28.472 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T09:55:28.472 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/daemon-helper
2026-03-10T09:55:28.498 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-10T09:55:28.563 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T09:55:28.563 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/daemon-helper
2026-03-10T09:55:28.589 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-10T09:55:28.656 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T09:55:28.657 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/daemon-helper
2026-03-10T09:55:28.685 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-10T09:55:28.755 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-10T09:55:28.755 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T09:55:28.755 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-10T09:55:28.780 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-10T09:55:28.846 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T09:55:28.846 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-10T09:55:28.871 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-10T09:55:28.936 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T09:55:28.936 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-10T09:55:28.959 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-10T09:55:29.026 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-03-10T09:55:29.026 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T09:55:29.026 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/stdin-killer
2026-03-10T09:55:29.053 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-10T09:55:29.122 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T09:55:29.122 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/stdin-killer
2026-03-10T09:55:29.147 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-10T09:55:29.210 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T09:55:29.210 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/stdin-killer
2026-03-10T09:55:29.234 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-10T09:55:29.297 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-10T09:55:29.342 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon election default strategy': 1}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': False}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'MON_DOWN', 'mons down', 'mon down', 'out of quorum', 'CEPHADM_STRAY_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-10T09:55:29.342 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:55:29.342 INFO:tasks.cephadm:Cluster fsid is 4533cc1c-1c67-11f1-85c0-e37e5114407d
2026-03-10T09:55:29.342 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-10T09:55:29.342 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.101', 'mon.b': '192.168.123.102', 'mon.c': '192.168.123.108'}
2026-03-10T09:55:29.342 INFO:tasks.cephadm:First mon is mon.a on vm01
2026-03-10T09:55:29.342 INFO:tasks.cephadm:First mgr is a
2026-03-10T09:55:29.342 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-10T09:55:29.343 DEBUG:teuthology.orchestra.run.vm01:> sudo hostname $(hostname -s)
2026-03-10T09:55:29.368 DEBUG:teuthology.orchestra.run.vm02:> sudo hostname $(hostname -s)
2026-03-10T09:55:29.393 DEBUG:teuthology.orchestra.run.vm08:> sudo hostname $(hostname -s)
2026-03-10T09:55:29.424 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra
2026-03-10T09:55:29.424 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:55:30.087 INFO:tasks.cephadm:builder_project result: [{'url': 'https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'chacra_url': 'https://3.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'centos', 'distro_version': '9', 'distro_codename': None, 'modified': '2026-02-25 18:55:15.146628', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['source', 'x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678.ge911bdeb', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.26+soko16', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-10T09:55:30.686 INFO:tasks.util.chacra:got chacra host 3.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=centos%2F9%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:55:30.687 INFO:tasks.cephadm:Discovered cachra url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm
2026-03-10T09:55:30.687 INFO:tasks.cephadm:Downloading cephadm from url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm
2026-03-10T09:55:30.687 DEBUG:teuthology.orchestra.run.vm01:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T09:55:32.008 INFO:teuthology.orchestra.run.vm01.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 09:55 /home/ubuntu/cephtest/cephadm
2026-03-10T09:55:32.008 DEBUG:teuthology.orchestra.run.vm02:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T09:55:33.372 INFO:teuthology.orchestra.run.vm02.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 09:55 /home/ubuntu/cephtest/cephadm
2026-03-10T09:55:33.372 DEBUG:teuthology.orchestra.run.vm08:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T09:55:34.797 INFO:teuthology.orchestra.run.vm08.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 09:55 /home/ubuntu/cephtest/cephadm
2026-03-10T09:55:34.797 DEBUG:teuthology.orchestra.run.vm01:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T09:55:34.813 DEBUG:teuthology.orchestra.run.vm02:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T09:55:34.830 DEBUG:teuthology.orchestra.run.vm08:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T09:55:34.853 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
2026-03-10T09:55:34.853 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T09:55:34.856 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T09:55:34.873 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T09:55:35.100 INFO:teuthology.orchestra.run.vm02.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T09:55:35.105 INFO:teuthology.orchestra.run.vm01.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T09:55:35.106 INFO:teuthology.orchestra.run.vm08.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T09:56:07.651 INFO:teuthology.orchestra.run.vm01.stdout:{
2026-03-10T09:56:07.651 INFO:teuthology.orchestra.run.vm01.stdout:    "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T09:56:07.651 INFO:teuthology.orchestra.run.vm01.stdout:    "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T09:56:07.651 INFO:teuthology.orchestra.run.vm01.stdout:    "repo_digests": [
2026-03-10T09:56:07.651 INFO:teuthology.orchestra.run.vm01.stdout:        "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T09:56:07.651 INFO:teuthology.orchestra.run.vm01.stdout:    ]
2026-03-10T09:56:07.651 INFO:teuthology.orchestra.run.vm01.stdout:}
2026-03-10T09:56:32.216 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T09:56:32.216 INFO:teuthology.orchestra.run.vm02.stdout:    "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T09:56:32.216 INFO:teuthology.orchestra.run.vm02.stdout:    "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T09:56:32.216 INFO:teuthology.orchestra.run.vm02.stdout:    "repo_digests": [
2026-03-10T09:56:32.216 INFO:teuthology.orchestra.run.vm02.stdout:        "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T09:56:32.216 INFO:teuthology.orchestra.run.vm02.stdout:    ]
2026-03-10T09:56:32.216 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T09:56:50.656 INFO:teuthology.orchestra.run.vm08.stdout:{
2026-03-10T09:56:50.657 INFO:teuthology.orchestra.run.vm08.stdout:    "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T09:56:50.657 INFO:teuthology.orchestra.run.vm08.stdout:    "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T09:56:50.657 INFO:teuthology.orchestra.run.vm08.stdout:    "repo_digests": [
2026-03-10T09:56:50.657 INFO:teuthology.orchestra.run.vm08.stdout:        "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T09:56:50.657 INFO:teuthology.orchestra.run.vm08.stdout:    ]
2026-03-10T09:56:50.657 INFO:teuthology.orchestra.run.vm08.stdout:}
2026-03-10T09:56:50.672 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /etc/ceph
2026-03-10T09:56:50.698 DEBUG:teuthology.orchestra.run.vm02:> sudo mkdir -p /etc/ceph
2026-03-10T09:56:50.725 DEBUG:teuthology.orchestra.run.vm08:> sudo mkdir -p /etc/ceph
2026-03-10T09:56:50.750 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 777 /etc/ceph
2026-03-10T09:56:50.774 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 777 /etc/ceph
2026-03-10T09:56:50.796 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod 777 /etc/ceph
2026-03-10T09:56:50.819 INFO:tasks.cephadm:Writing seed config...
2026-03-10T09:56:50.820 INFO:tasks.cephadm: override: [global] mon election default strategy = 1
2026-03-10T09:56:50.820 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-10T09:56:50.820 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-10T09:56:50.820 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = False
2026-03-10T09:56:50.820 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-10T09:56:50.820 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-10T09:56:50.820 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-10T09:56:50.820 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-10T09:56:50.820 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-10T09:56:50.820 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-10T09:56:50.820 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T09:56:50.820 DEBUG:teuthology.orchestra.run.vm01:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-10T09:56:50.835 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = 4533cc1c-1c67-11f1-85c0-e37e5114407d
mon election default strategy = 1

[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000

[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
mgr/cephadm/use_agent = False

[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20

[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-10T09:56:50.836 DEBUG:teuthology.orchestra.run.vm01:mon.a> sudo journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mon.a.service
2026-03-10T09:56:50.877 DEBUG:teuthology.orchestra.run.vm01:mgr.a> sudo journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mgr.a.service
2026-03-10T09:56:50.919 INFO:tasks.cephadm:Bootstrapping...
2026-03-10T09:56:50.919 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.101 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-10T09:56:51.066 INFO:teuthology.orchestra.run.vm01.stdout:--------------------------------------------------------------------------------
2026-03-10T09:56:51.066 INFO:teuthology.orchestra.run.vm01.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '4533cc1c-1c67-11f1-85c0-e37e5114407d', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'a', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.101', '--skip-admin-label']
2026-03-10T09:56:51.067 INFO:teuthology.orchestra.run.vm01.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-10T09:56:51.067 INFO:teuthology.orchestra.run.vm01.stdout:Verifying podman|docker is present...
2026-03-10T09:56:51.090 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stdout 5.8.0
2026-03-10T09:56:51.090 INFO:teuthology.orchestra.run.vm01.stdout:Verifying lvm2 is present...
2026-03-10T09:56:51.090 INFO:teuthology.orchestra.run.vm01.stdout:Verifying time synchronization is in place...
2026-03-10T09:56:51.097 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T09:56:51.097 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T09:56:51.102 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T09:56:51.102 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-10T09:56:51.108 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout enabled
2026-03-10T09:56:51.114 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout active
2026-03-10T09:56:51.114 INFO:teuthology.orchestra.run.vm01.stdout:Unit chronyd.service is enabled and running
2026-03-10T09:56:51.114 INFO:teuthology.orchestra.run.vm01.stdout:Repeating the final host check...
2026-03-10T09:56:51.135 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stdout 5.8.0
2026-03-10T09:56:51.135 INFO:teuthology.orchestra.run.vm01.stdout:podman (/bin/podman) version 5.8.0 is present
2026-03-10T09:56:51.135 INFO:teuthology.orchestra.run.vm01.stdout:systemctl is present
2026-03-10T09:56:51.135 INFO:teuthology.orchestra.run.vm01.stdout:lvcreate is present
2026-03-10T09:56:51.141 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T09:56:51.141 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T09:56:51.146 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T09:56:51.146 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-10T09:56:51.151 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout enabled
2026-03-10T09:56:51.156 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout active
2026-03-10T09:56:51.156 INFO:teuthology.orchestra.run.vm01.stdout:Unit chronyd.service is enabled and running
2026-03-10T09:56:51.156 INFO:teuthology.orchestra.run.vm01.stdout:Host looks OK
2026-03-10T09:56:51.156 INFO:teuthology.orchestra.run.vm01.stdout:Cluster fsid: 4533cc1c-1c67-11f1-85c0-e37e5114407d
2026-03-10T09:56:51.156 INFO:teuthology.orchestra.run.vm01.stdout:Acquiring lock 139984274964096 on /run/cephadm/4533cc1c-1c67-11f1-85c0-e37e5114407d.lock
2026-03-10T09:56:51.156 INFO:teuthology.orchestra.run.vm01.stdout:Lock 139984274964096 acquired on /run/cephadm/4533cc1c-1c67-11f1-85c0-e37e5114407d.lock
2026-03-10T09:56:51.157 INFO:teuthology.orchestra.run.vm01.stdout:Verifying IP 192.168.123.101 port 3300 ...
2026-03-10T09:56:51.157 INFO:teuthology.orchestra.run.vm01.stdout:Verifying IP 192.168.123.101 port 6789 ...
2026-03-10T09:56:51.157 INFO:teuthology.orchestra.run.vm01.stdout:Base mon IP(s) is [192.168.123.101:3300, 192.168.123.101:6789], mon addrv is [v2:192.168.123.101:3300,v1:192.168.123.101:6789]
2026-03-10T09:56:51.160 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.101 metric 100
2026-03-10T09:56:51.160 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.101 metric 100
2026-03-10T09:56:51.163 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-10T09:56:51.163 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium
2026-03-10T09:56:51.165 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-10T09:56:51.165 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-10T09:56:51.165 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T09:56:51.165 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000
2026-03-10T09:56:51.165 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:1/64 scope link noprefixroute
2026-03-10T09:56:51.165 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T09:56:51.166 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.0/24`
2026-03-10T09:56:51.166 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.0/24`
2026-03-10T09:56:51.166 INFO:teuthology.orchestra.run.vm01.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24']
2026-03-10T09:56:51.166 INFO:teuthology.orchestra.run.vm01.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-10T09:56:51.167 INFO:teuthology.orchestra.run.vm01.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T09:56:52.380 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c
2026-03-10T09:56:52.380 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T09:56:52.380 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Getting image source signatures
2026-03-10T09:56:52.380 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7
2026-03-10T09:56:52.380 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92
2026-03-10T09:56:52.380 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c
2026-03-10T09:56:52.380 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Writing manifest to image destination
2026-03-10T09:56:52.649 INFO:teuthology.orchestra.run.vm01.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T09:56:52.649 INFO:teuthology.orchestra.run.vm01.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T09:56:52.649 INFO:teuthology.orchestra.run.vm01.stdout:Extracting ceph user uid/gid from container image...
2026-03-10T09:56:52.877 INFO:teuthology.orchestra.run.vm01.stdout:stat: stdout 167 167
2026-03-10T09:56:52.877 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial keys...
2026-03-10T09:56:53.105 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQDk6q9pnxO9ORAAIUMn5ierj+VKp95qvbxgKA==
2026-03-10T09:56:53.329 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQDl6q9p0VGPCxAAUWucALIXQAJRPmX6zCSb1A==
2026-03-10T09:56:53.568 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQDl6q9pozZiGRAAL/dTA5OAza09c70APXs6bw==
2026-03-10T09:56:53.568 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial monmap...
2026-03-10T09:56:54.028 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T09:56:54.028 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-10T09:56:54.028 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:56:54.028 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T09:56:54.028 INFO:teuthology.orchestra.run.vm01.stdout:monmaptool for a [v2:192.168.123.101:3300,v1:192.168.123.101:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T09:56:54.028 INFO:teuthology.orchestra.run.vm01.stdout:setting min_mon_release = quincy 2026-03-10T09:56:54.028 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: set fsid to 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:56:54.028 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T09:56:54.028 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:56:54.028 INFO:teuthology.orchestra.run.vm01.stdout:Creating mon... 2026-03-10T09:56:54.279 INFO:teuthology.orchestra.run.vm01.stdout:create mon.a on 2026-03-10T09:56:54.448 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target". 2026-03-10T09:56:54.569 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-10T09:56:54.698 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d.target → /etc/systemd/system/ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d.target. 
2026-03-10T09:56:54.699 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d.target → /etc/systemd/system/ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d.target. 2026-03-10T09:56:54.843 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mon.a 2026-03-10T09:56:54.843 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to reset failed state of unit ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mon.a.service: Unit ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mon.a.service not loaded. 2026-03-10T09:56:54.983 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d.target.wants/ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mon.a.service → /etc/systemd/system/ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@.service. 2026-03-10T09:56:55.117 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:55 vm01 podman[51506]: 2026-03-10 09:56:55.090079279 +0000 UTC m=+0.014907992 container create ce14701131fb913732f5eddb1640a0093bfed37b62ce4b89b0ecef89969f1f32 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-a, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.schema-version=1.0) 2026-03-10T09:56:55.134 
INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-10T09:56:55.134 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T09:56:55.134 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mon to start... 2026-03-10T09:56:55.134 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mon... 2026-03-10T09:56:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:55 vm01 podman[51506]: 2026-03-10 09:56:55.120228755 +0000 UTC m=+0.045057457 container init ce14701131fb913732f5eddb1640a0093bfed37b62ce4b89b0ecef89969f1f32 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-a, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T09:56:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:55 vm01 podman[51506]: 2026-03-10 09:56:55.123793947 +0000 UTC m=+0.048622660 container start ce14701131fb913732f5eddb1640a0093bfed37b62ce4b89b0ecef89969f1f32 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 
9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T09:56:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:55 vm01 bash[51506]: ce14701131fb913732f5eddb1640a0093bfed37b62ce4b89b0ecef89969f1f32 2026-03-10T09:56:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:55 vm01 podman[51506]: 2026-03-10 09:56:55.084072533 +0000 UTC m=+0.008901257 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T09:56:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:55 vm01 systemd[1]: Started Ceph mon.a for 4533cc1c-1c67-11f1-85c0-e37e5114407d. 
2026-03-10T09:56:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:55 vm01 ceph-mon[51543]: mkfs 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:56:55.426 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:55 vm01 ceph-mon[51543]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout cluster: 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout id: 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout services: 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.142527s) 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout data: 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout pgs: 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T09:56:55.430 INFO:teuthology.orchestra.run.vm01.stdout:mon is available 2026-03-10T09:56:55.430 
INFO:teuthology.orchestra.run.vm01.stdout:Assimilating anything we can from ceph.conf... 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout fsid = 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.101:3300,v1:192.168.123.101:6789] 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr/cephadm/use_agent = False 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 
2026-03-10T09:56:56.083 INFO:teuthology.orchestra.run.vm01.stdout:Generating new minimal ceph.conf... 2026-03-10T09:56:56.250 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51543]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T09:56:56.250 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51543]: monmap epoch 1 2026-03-10T09:56:56.250 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51543]: fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:56:56.250 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51543]: last_changed 2026-03-10T09:56:53.660147+0000 2026-03-10T09:56:56.250 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51543]: created 2026-03-10T09:56:53.660147+0000 2026-03-10T09:56:56.250 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51543]: min_mon_release 19 (squid) 2026-03-10T09:56:56.250 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51543]: election_strategy: 1 2026-03-10T09:56:56.250 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51543]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T09:56:56.250 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51543]: fsmap 2026-03-10T09:56:56.250 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51543]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T09:56:56.250 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51543]: mgrmap e1: no daemons active 2026-03-10T09:56:56.250 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51543]: from='client.? 192.168.123.101:0/3205654489' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-10T09:56:56.250 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51543]: from='client.? 
192.168.123.101:0/2458499503' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T09:56:56.250 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51543]: from='client.? 192.168.123.101:0/2458499503' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T09:56:56.387 INFO:teuthology.orchestra.run.vm01.stdout:Restarting the monitor... 2026-03-10T09:56:56.651 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 systemd[1]: Stopping Ceph mon.a for 4533cc1c-1c67-11f1-85c0-e37e5114407d... 2026-03-10T09:56:56.651 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-a[51517]: 2026-03-10T09:56:56.457+0000 7ff5c9303640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T09:56:56.651 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-a[51517]: 2026-03-10T09:56:56.457+0000 7ff5c9303640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T09:56:56.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 podman[51815]: 2026-03-10 09:56:56.651913232 +0000 UTC m=+0.208089812 container died ce14701131fb913732f5eddb1640a0093bfed37b62ce4b89b0ecef89969f1f32 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph 
Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T09:56:56.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 podman[51815]: 2026-03-10 09:56:56.765743415 +0000 UTC m=+0.321919995 container remove ce14701131fb913732f5eddb1640a0093bfed37b62ce4b89b0ecef89969f1f32 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-a, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.build-date=20260223, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3) 2026-03-10T09:56:56.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 bash[51815]: ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-a 2026-03-10T09:56:56.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 systemd[1]: ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mon.a.service: Deactivated successfully. 2026-03-10T09:56:56.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 systemd[1]: Stopped Ceph mon.a for 4533cc1c-1c67-11f1-85c0-e37e5114407d. 2026-03-10T09:56:56.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 systemd[1]: Starting Ceph mon.a for 4533cc1c-1c67-11f1-85c0-e37e5114407d... 
2026-03-10T09:56:56.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 podman[51895]: 2026-03-10 09:56:56.902089708 +0000 UTC m=+0.014814437 container create 4eaa42105425c83378dead67f28c30d54444b8cab9e462a3be67c7ad604626ca (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-a, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T09:56:56.946 INFO:teuthology.orchestra.run.vm01.stdout:Setting public_network to 192.168.123.0/24 in mon config section 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 podman[51895]: 2026-03-10 09:56:56.934234292 +0000 UTC m=+0.046959031 container init 4eaa42105425c83378dead67f28c30d54444b8cab9e462a3be67c7ad604626ca (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-a, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 podman[51895]: 2026-03-10 09:56:56.938238334 +0000 UTC m=+0.050963063 container start 4eaa42105425c83378dead67f28c30d54444b8cab9e462a3be67c7ad604626ca (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-a, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223) 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 bash[51895]: 4eaa42105425c83378dead67f28c30d54444b8cab9e462a3be67c7ad604626ca 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 podman[51895]: 2026-03-10 09:56:56.896019504 +0000 UTC m=+0.008744243 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 systemd[1]: Started Ceph mon.a for 4533cc1c-1c67-11f1-85c0-e37e5114407d. 
2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: set uid:gid to 167:167 (ceph:ceph) 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 6 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: pidfile_write: ignore empty --pid-file 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: load: jerasure load: lrc 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: RocksDB version: 7.9.2 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Git sha 0 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: DB SUMMARY 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: DB Session ID: N38MY367SS8VG9LZLO6R 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: CURRENT file: CURRENT 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: IDENTITY file: IDENTITY 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 
ceph-mon[51930]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 86937 ; 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.error_if_exists: 0 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.create_if_missing: 0 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.paranoid_checks: 1 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.env: 0x5621bf4dadc0 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.fs: PosixFileSystem 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.info_log: 0x5621c146c700 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_file_opening_threads: 16 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.statistics: (nil) 2026-03-10T09:56:57.240 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.use_fsync: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_log_file_size: 0 2026-03-10T09:56:57.241 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.keep_log_file_num: 1000 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.recycle_log_file_num: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.allow_fallocate: 1 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.allow_mmap_reads: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.allow_mmap_writes: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.use_direct_reads: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.create_missing_column_families: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.db_log_dir: 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.wal_dir: 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T09:56:57.241 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.advise_random_on_open: 1 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.db_write_buffer_size: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.write_buffer_manager: 0x5621c1471900 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.rate_limiter: (nil) 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.wal_recovery_mode: 2 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: 
rocksdb: Options.enable_thread_tracking: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.enable_pipelined_write: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.unordered_write: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.row_cache: None 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.wal_filter: None 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.allow_ingest_behind: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.two_write_queues: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.manual_wal_flush: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.wal_compression: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 
ceph-mon[51930]: rocksdb: Options.atomic_flush: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.log_readahead_size: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.best_efforts_recovery: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.allow_data_in_errors: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.db_host_id: __hostname__ 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_background_jobs: 2 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_background_compactions: -1 2026-03-10T09:56:57.241 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_subcompactions: 1 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_total_wal_size: 0 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T09:56:57.241 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_open_files: -1 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.bytes_per_sync: 0 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: 
Options.compaction_readahead_size: 0 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_background_flushes: -1 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Compression algorithms supported: 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: kZSTD supported: 0 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: kXpressCompression supported: 0 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: kBZip2Compression supported: 0 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: kLZ4Compression supported: 1 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: kZlibCompression supported: 1 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: kLZ4HCCompression supported: 1 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: kSnappyCompression supported: 1 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-10T09:56:57.242 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.merge_operator: 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compaction_filter: None 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compaction_filter_factory: None 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.sst_partitioner_factory: None 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621c146c640) 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout: cache_index_and_filter_blocks: 1 2026-03-10T09:56:57.242 INFO:journalctl@ceph.mon.a.vm01.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: pin_top_level_index_and_filter: 1 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: index_type: 0 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: data_block_index_type: 0 
2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: index_shortening: 1 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: checksum: 4 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: no_block_cache: 0 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: block_cache: 0x5621c1491350 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: block_cache_name: BinnedLRUCache 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: block_cache_options: 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: capacity : 536870912 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: num_shard_bits : 4 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: strict_capacity_limit : 0 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: high_pri_pool_ratio: 0.000 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: block_cache_compressed: (nil) 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: persistent_cache: (nil) 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: block_size: 4096 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: block_size_deviation: 10 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: block_restart_interval: 16 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: index_block_restart_interval: 1 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: metadata_block_size: 4096 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: partition_filters: 0 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: use_delta_encoding: 1 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: filter_policy: bloomfilter 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: whole_key_filtering: 1 2026-03-10T09:56:57.243 
INFO:journalctl@ceph.mon.a.vm01.stdout: verify_compression: 0 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: read_amp_bytes_per_bit: 0 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: format_version: 5 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: enable_index_compression: 1 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: block_align: 0 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: max_auto_readahead_size: 262144 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: prepopulate_block_cache: 0 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: initial_auto_readahead_size: 8192 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout: num_file_reads_for_auto_readahead: 2 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.write_buffer_size: 33554432 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_write_buffer_number: 2 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compression: NoCompression 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.bottommost_compression: Disabled 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.prefix_extractor: nullptr 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.num_levels: 7 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.min_write_buffer_number_to_merge: 1 
2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T09:56:57.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T09:56:57.244 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compression_opts.level: 32767 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compression_opts.strategy: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compression_opts.enabled: false 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.target_file_size_base: 67108864 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T09:56:57.244 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: 
Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.arena_block_size: 1048576 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.disable_auto_compactions: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: 
Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.inplace_update_support: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.bloom_locality: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.max_successive_merges: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.paranoid_file_checks: 0 
2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.force_consistency_checks: 1 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.ttl: 2592000 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.enable_blob_files: false 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.min_blob_size: 0 2026-03-10T09:56:57.244 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.blob_file_size: 268435456 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T09:56:57.245 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.blob_file_starting_level: 0 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c8286ca2-b3b9-40db-9dff-6ca2ec6223a2 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773136616960678, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773136616962426, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 83898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 245, "table_properties": {"data_size": 82064, "index_size": 223, 
"index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 581, "raw_key_size": 10134, "raw_average_key_size": 47, "raw_value_size": 76259, "raw_average_value_size": 359, "num_data_blocks": 10, "num_entries": 212, "num_filter_entries": 212, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773136616, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c8286ca2-b3b9-40db-9dff-6ca2ec6223a2", "db_session_id": "N38MY367SS8VG9LZLO6R", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773136616962476, "job": 1, "event": "recovery_finished"} 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5621c1492e00 
2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: rocksdb: DB pointer 0x5621c15a8000 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: starting mon.a rank 0 at public addrs [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] at bind addrs [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: mon.a@-1(???) e1 preinit fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: mon.a@-1(???).mds e1 new map 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: mon.a@-1(???).mds e1 print_map 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout: e1 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout: btime 2026-03-10T09:56:55:158427+0000 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout: legacy client fscid: -1 2026-03-10T09:56:57.245 
INFO:journalctl@ceph.mon.a.vm01.stdout: 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout: No filesystems configured 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: mon.a@-1(???).mgr e0 loading version 1 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: mon.a@-1(???).mgr e1 active server: (0) 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:56 vm01 ceph-mon[51930]: mon.a@-1(???).mgr e1 mkfs or daemon transitioned to available, loading commands 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:57 vm01 ceph-mon[51930]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:57 vm01 ceph-mon[51930]: monmap epoch 1 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:57 vm01 ceph-mon[51930]: fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 
09:56:57 vm01 ceph-mon[51930]: last_changed 2026-03-10T09:56:53.660147+0000 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:57 vm01 ceph-mon[51930]: created 2026-03-10T09:56:53.660147+0000 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:57 vm01 ceph-mon[51930]: min_mon_release 19 (squid) 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:57 vm01 ceph-mon[51930]: election_strategy: 1 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:57 vm01 ceph-mon[51930]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:57 vm01 ceph-mon[51930]: fsmap 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:57 vm01 ceph-mon[51930]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T09:56:57.245 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:57 vm01 ceph-mon[51930]: mgrmap e1: no daemons active 2026-03-10T09:56:57.245 INFO:teuthology.orchestra.run.vm01.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T09:56:57.247 INFO:teuthology.orchestra.run.vm01.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T09:56:57.247 INFO:teuthology.orchestra.run.vm01.stdout:Creating mgr... 2026-03-10T09:56:57.247 INFO:teuthology.orchestra.run.vm01.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T09:56:57.247 INFO:teuthology.orchestra.run.vm01.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-10T09:56:57.383 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mgr.a 2026-03-10T09:56:57.383 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to reset failed state of unit ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mgr.a.service: Unit ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mgr.a.service not loaded. 
2026-03-10T09:56:57.501 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d.target.wants/ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mgr.a.service → /etc/systemd/system/ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@.service. 2026-03-10T09:56:57.515 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:57 vm01 systemd[1]: Starting Ceph mgr.a for 4533cc1c-1c67-11f1-85c0-e37e5114407d... 2026-03-10T09:56:57.654 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-10T09:56:57.654 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T09:56:57.654 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-10T09:56:57.654 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-10T09:56:57.654 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr to start... 2026-03-10T09:56:57.654 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr... 
2026-03-10T09:56:57.782 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:57 vm01 podman[52154]: 2026-03-10 09:56:57.599085717 +0000 UTC m=+0.014548410 container create 7a8fdbe7e3b9741c41450eacf3578fdb33a9b92da3bce14bc390dee10ccc89bc (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default) 2026-03-10T09:56:57.782 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:57 vm01 podman[52154]: 2026-03-10 09:56:57.641615269 +0000 UTC m=+0.057077952 container init 7a8fdbe7e3b9741c41450eacf3578fdb33a9b92da3bce14bc390dee10ccc89bc (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True) 
2026-03-10T09:56:57.782 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:57 vm01 podman[52154]: 2026-03-10 09:56:57.646137651 +0000 UTC m=+0.061600344 container start 7a8fdbe7e3b9741c41450eacf3578fdb33a9b92da3bce14bc390dee10ccc89bc (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T09:56:57.782 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:57 vm01 bash[52154]: 7a8fdbe7e3b9741c41450eacf3578fdb33a9b92da3bce14bc390dee10ccc89bc 2026-03-10T09:56:57.782 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:57 vm01 podman[52154]: 2026-03-10 09:56:57.59276267 +0000 UTC m=+0.008225363 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T09:56:57.782 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:57 vm01 systemd[1]: Started Ceph mgr.a for 4533cc1c-1c67-11f1-85c0-e37e5114407d. 
2026-03-10T09:56:57.782 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:57 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:57.740+0000 7fe9b480e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "4533cc1c-1c67-11f1-85c0-e37e5114407d", 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T09:56:57.971 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T09:56:57.971 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T09:56:57.972 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T09:56:55:158427+0000", 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: 
stdout }, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T09:56:55.159799+0000", 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T09:56:57.972 INFO:teuthology.orchestra.run.vm01.stdout:mgr not available, waiting (1/15)... 2026-03-10T09:56:58.117 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:57 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:57.785+0000 7fe9b480e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T09:56:58.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:58 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/4087453228' entity='client.admin' 2026-03-10T09:56:58.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:56:58 vm01 ceph-mon[51930]: from='client.? 
192.168.123.101:0/47960258' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T09:56:58.429 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:58 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:58.170+0000 7fe9b480e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T09:56:58.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:58 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:58.466+0000 7fe9b480e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T09:56:58.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:58 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T09:56:58.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:58 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T09:56:58.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:58 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: from numpy import show_config as show_numpy_config 2026-03-10T09:56:58.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:58 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:58.545+0000 7fe9b480e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T09:56:58.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:58 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:58.579+0000 7fe9b480e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T09:56:58.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:58 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:58.644+0000 7fe9b480e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T09:56:59.429 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:59 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:59.102+0000 7fe9b480e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T09:56:59.429 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:59 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:59.204+0000 7fe9b480e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T09:56:59.429 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:59 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:59.240+0000 7fe9b480e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T09:56:59.429 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:59 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:59.271+0000 7fe9b480e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T09:56:59.429 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:59 vm01 
ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:59.309+0000 7fe9b480e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T09:56:59.429 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:59 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:59.343+0000 7fe9b480e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T09:56:59.754 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:59 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:59.499+0000 7fe9b480e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T09:56:59.754 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:59 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:59.546+0000 7fe9b480e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T09:56:59.754 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:56:59 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:56:59.753+0000 7fe9b480e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T09:57:00.255 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:00 vm01 ceph-mon[51930]: from='client.? 
192.168.123.101:0/700166012' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T09:57:00.255 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:00 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:00.044+0000 7fe9b480e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T09:57:00.255 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:00 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:00.094+0000 7fe9b480e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T09:57:00.255 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:00 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:00.138+0000 7fe9b480e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T09:57:00.255 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:00 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:00.220+0000 7fe9b480e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T09:57:00.255 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:00 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:00.255+0000 7fe9b480e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T09:57:00.300 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "4533cc1c-1c67-11f1-85c0-e37e5114407d", 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T09:57:00.301 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 
2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T09:56:55:158427+0000", 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 
2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T09:56:55.159799+0000", 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:57:00.301 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:57:00.302 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T09:57:00.302 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T09:57:00.302 INFO:teuthology.orchestra.run.vm01.stdout:mgr not available, waiting (2/15)... 
2026-03-10T09:57:00.573 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:00 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:00.340+0000 7fe9b480e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T09:57:00.573 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:00 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:00.444+0000 7fe9b480e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T09:57:00.573 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:00 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:00.572+0000 7fe9b480e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T09:57:00.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:00 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:00.607+0000 7fe9b480e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T09:57:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:01 vm01 ceph-mon[51930]: Activating manager daemon a 2026-03-10T09:57:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:01 vm01 ceph-mon[51930]: mgrmap e2: a(active, starting, since 0.00464264s) 2026-03-10T09:57:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:01 vm01 ceph-mon[51930]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T09:57:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:01 vm01 ceph-mon[51930]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T09:57:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:01 vm01 ceph-mon[51930]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T09:57:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:01 vm01 ceph-mon[51930]: from='mgr.14100 
192.168.123.101:0/4285148820' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:57:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:01 vm01 ceph-mon[51930]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T09:57:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:01 vm01 ceph-mon[51930]: Manager daemon a is now available 2026-03-10T09:57:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:01 vm01 ceph-mon[51930]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T09:57:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:01 vm01 ceph-mon[51930]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T09:57:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:01 vm01 ceph-mon[51930]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' 2026-03-10T09:57:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:01 vm01 ceph-mon[51930]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' 2026-03-10T09:57:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:01 vm01 ceph-mon[51930]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "4533cc1c-1c67-11f1-85c0-e37e5114407d", 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 
2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 
2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T09:57:02.688 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T09:56:55:158427+0000", 2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout },
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ],
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout },
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T09:56:55.159799+0000",
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout },
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }
2026-03-10T09:57:02.689 INFO:teuthology.orchestra.run.vm01.stdout:mgr is available
2026-03-10T09:57:02.908 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:02 vm01 ceph-mon[51930]: mgrmap e3: a(active, since 1.00936s)
2026-03-10T09:57:02.908 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:02 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/1087518431' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T09:57:03.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout fsid = 4533cc1c-1c67-11f1-85c0-e37e5114407d
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.101:3300,v1:192.168.123.101:6789]
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T09:57:03.039 INFO:teuthology.orchestra.run.vm01.stdout:Enabling cephadm module...
2026-03-10T09:57:03.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:03 vm01 ceph-mon[51930]: mgrmap e4: a(active, since 2s)
2026-03-10T09:57:03.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:03 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/953795590' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-10T09:57:03.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:03 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/953795590' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
2026-03-10T09:57:03.923 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:03 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/2138066393' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-10T09:57:04.179 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:03 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: ignoring --setuser ceph since I am not root
2026-03-10T09:57:04.179 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:03 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: ignoring --setgroup ceph since I am not root
2026-03-10T09:57:04.179 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:04 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:04.027+0000 7fcf6adb3140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T09:57:04.179 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:04 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:04.081+0000 7fcf6adb3140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T09:57:04.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout {
2026-03-10T09:57:04.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 5,
2026-03-10T09:57:04.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T09:57:04.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "active_name": "a",
2026-03-10T09:57:04.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-10T09:57:04.468 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }
2026-03-10T09:57:04.468 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for the mgr to restart...
2026-03-10T09:57:04.468 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr epoch 5...
2026-03-10T09:57:04.882 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:04 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:04.530+0000 7fcf6adb3140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T09:57:04.882 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:04 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:04.882+0000 7fcf6adb3140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T09:57:05.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:04 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/2138066393' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-10T09:57:05.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:04 vm01 ceph-mon[51930]: mgrmap e5: a(active, since 3s)
2026-03-10T09:57:05.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:04 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/3207189997' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T09:57:05.179 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:04 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T09:57:05.179 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:04 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T09:57:05.179 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:04 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: from numpy import show_config as show_numpy_config
2026-03-10T09:57:05.179 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:04 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:04.970+0000 7fcf6adb3140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T09:57:05.179 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:05 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:05.006+0000 7fcf6adb3140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T09:57:05.179 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:05 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:05.077+0000 7fcf6adb3140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T09:57:05.845 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:05 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:05.576+0000 7fcf6adb3140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T09:57:05.845 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:05 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:05.691+0000 7fcf6adb3140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T09:57:05.845 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:05 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:05.729+0000 7fcf6adb3140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T09:57:05.845 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:05 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:05.764+0000 7fcf6adb3140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T09:57:05.845 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:05 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:05.806+0000 7fcf6adb3140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T09:57:05.845 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:05 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:05.845+0000 7fcf6adb3140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T09:57:06.179 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:06 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:06.015+0000 7fcf6adb3140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T09:57:06.179 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:06 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:06.064+0000 7fcf6adb3140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T09:57:06.550 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:06 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:06.279+0000 7fcf6adb3140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
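[editor's note] The mgr journal above emits one "Module <name> has missing NOTIFY_TYPES member" warning per module on every mgr (re)start, which dominates the log. A minimal triage sketch that collects the affected module names from raw lines; the regex shape is inferred from the entries above and is not part of teuthology:

```python
import re

# Matches the mgr warnings seen in this log:
#   "... -1 mgr[py] Module <name> has missing NOTIFY_TYPES member"
NOTIFY_RE = re.compile(r"mgr\[py\] Module (\S+) has missing NOTIFY_TYPES member")

def modules_missing_notify_types(lines):
    """Return the sorted, de-duplicated module names that emitted the warning."""
    found = set()
    for line in lines:
        m = NOTIFY_RE.search(line)
        if m:
            found.add(m.group(1))
    return sorted(found)
```

Feeding the journalctl stream above through this helper collapses hundreds of repeated warnings into one list of module names per mgr restart.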
2026-03-10T09:57:06.550 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:06 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:06.550+0000 7fcf6adb3140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T09:57:06.813 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:06 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:06.586+0000 7fcf6adb3140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T09:57:06.813 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:06 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:06.627+0000 7fcf6adb3140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T09:57:06.813 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:06 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:06.701+0000 7fcf6adb3140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T09:57:06.813 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:06 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:06.737+0000 7fcf6adb3140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T09:57:07.096 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:06 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:06.813+0000 7fcf6adb3140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T09:57:07.096 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:06 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:06.922+0000 7fcf6adb3140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T09:57:07.096 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:07.058+0000 7fcf6adb3140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:07.096+0000 7fcf6adb3140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: Active manager daemon a restarted
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: Activating manager daemon a
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: osdmap e2: 0 total, 0 up, 0 in
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: mgrmap e6: a(active, starting, since 0.00512457s)
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: Manager daemon a is now available
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a'
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a'
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T09:57:07.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:07 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T09:57:08.244 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout {
2026-03-10T09:57:08.244 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7,
2026-03-10T09:57:08.244 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-10T09:57:08.244 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }
2026-03-10T09:57:08.244 INFO:teuthology.orchestra.run.vm01.stdout:mgr epoch 5 is available
2026-03-10T09:57:08.244 INFO:teuthology.orchestra.run.vm01.stdout:Setting orchestrator backend to cephadm...
2026-03-10T09:57:08.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:08 vm01 ceph-mon[51930]: Found migration_current of "None". Setting to last migration.
2026-03-10T09:57:08.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:08 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a'
2026-03-10T09:57:08.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:08 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a'
2026-03-10T09:57:08.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:08 vm01 ceph-mon[51930]: mgrmap e7: a(active, since 1.0101s)
2026-03-10T09:57:08.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:08 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a'
2026-03-10T09:57:08.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:08 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:57:09.034 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout value unchanged
2026-03-10T09:57:09.034 INFO:teuthology.orchestra.run.vm01.stdout:Generating ssh key...
2026-03-10T09:57:09.545 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: Generating public/private ed25519 key pair.
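[editor's note] Every teuthology line above follows the shape `<ISO timestamp> <LEVEL>:<logger>:<message>`, with journalctl payloads nested inside the message. A small hedged sketch for splitting such lines during ad-hoc filtering; this is illustrative only, not teuthology's own parser:

```python
from datetime import datetime

def parse_line(line):
    """Split one teuthology log line into (timestamp, level, logger, message).

    Assumes the '<ISO timestamp> <LEVEL>:<logger>:<message>' shape seen in
    this log; logger names contain no colons, so maxsplit=2 keeps any
    colons inside the message intact.
    """
    ts_str, rest = line.split(" ", 1)
    level, logger, message = rest.split(":", 2)
    return datetime.fromisoformat(ts_str), level, logger, message
```

This is enough to, for example, keep only `journalctl@ceph.mon.a.vm01.stdout` entries or sort interleaved streams by timestamp.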
2026-03-10T09:57:09.545 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: Your identification has been saved in /tmp/tmp0fc2oq32/key
2026-03-10T09:57:09.545 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: Your public key has been saved in /tmp/tmp0fc2oq32/key.pub
2026-03-10T09:57:09.545 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: The key fingerprint is:
2026-03-10T09:57:09.545 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: SHA256:3vWDQh5+T4v3zglfhg4n6SnRLYQdB5mYgieOG1v5gPE ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d
2026-03-10T09:57:09.545 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: The key's randomart image is:
2026-03-10T09:57:09.545 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: +--[ED25519 256]--+
2026-03-10T09:57:09.545 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: | . o.+ |
2026-03-10T09:57:09.545 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: | . o o o + . |
2026-03-10T09:57:09.546 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: | * + . o o |
2026-03-10T09:57:09.546 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: | + E . o |
2026-03-10T09:57:09.546 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: | = oS oo.. |
2026-03-10T09:57:09.546 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: | o ..=.oo+.. |
2026-03-10T09:57:09.546 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: | . =.*o* o|
2026-03-10T09:57:09.546 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: | .+ X+=o|
2026-03-10T09:57:09.546 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: | .+.+=+|
2026-03-10T09:57:09.546 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: +----[SHA256]-----+
2026-03-10T09:57:09.799 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-mon[51930]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T09:57:09.799 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-mon[51930]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T09:57:09.799 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-mon[51930]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:57:09.799 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-mon[51930]: [10/Mar/2026:09:57:08] ENGINE Bus STARTING
2026-03-10T09:57:09.799 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-mon[51930]: [10/Mar/2026:09:57:08] ENGINE Serving on http://192.168.123.101:8765
2026-03-10T09:57:09.799 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-mon[51930]: [10/Mar/2026:09:57:08] ENGINE Serving on https://192.168.123.101:7150
2026-03-10T09:57:09.799 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-mon[51930]: [10/Mar/2026:09:57:08] ENGINE Bus STARTED
2026-03-10T09:57:09.799 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-mon[51930]: [10/Mar/2026:09:57:08] ENGINE Client ('192.168.123.101', 43166) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T09:57:09.800 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:57:09.800 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-mon[51930]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:57:09.800 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a'
2026-03-10T09:57:09.800 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:09 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a'
2026-03-10T09:57:09.803 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIORtGJi8T3ZnVUbCZDMVuz+dgV0u/IBSt3etz434OnxS ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d
2026-03-10T09:57:09.804 INFO:teuthology.orchestra.run.vm01.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub
2026-03-10T09:57:09.805 INFO:teuthology.orchestra.run.vm01.stdout:Adding key to root@localhost authorized_keys...
2026-03-10T09:57:09.805 INFO:teuthology.orchestra.run.vm01.stdout:Adding host vm01...
2026-03-10T09:57:10.852 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:10 vm01 ceph-mon[51930]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:57:10.852 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:10 vm01 ceph-mon[51930]: Generating ssh key...
2026-03-10T09:57:10.852 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:10 vm01 ceph-mon[51930]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:57:10.852 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:10 vm01 ceph-mon[51930]: mgrmap e8: a(active, since 2s)
2026-03-10T09:57:11.756 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Added host 'vm01' with addr '192.168.123.101'
2026-03-10T09:57:11.756 INFO:teuthology.orchestra.run.vm01.stdout:Deploying unmanaged mon service...
2026-03-10T09:57:11.874 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:11 vm01 ceph-mon[51930]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm01", "addr": "192.168.123.101", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:57:11.874 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:11 vm01 ceph-mon[51930]: Deploying cephadm binary to vm01
2026-03-10T09:57:11.874 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:11 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a'
2026-03-10T09:57:11.874 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:11 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:57:12.160 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled mon update...
2026-03-10T09:57:12.160 INFO:teuthology.orchestra.run.vm01.stdout:Deploying unmanaged mgr service...
2026-03-10T09:57:12.557 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled mgr update...
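[editor's note] Twice in this run ("Waiting for mgr epoch 5...", "Waiting for mgr epoch 9...") the bootstrap output polls until the mgr map reaches a new epoch after a module enable forces a mgr restart. The loop can be sketched generically as follows; `wait_for_mgr_epoch` and `get_epoch` are hypothetical names for illustration (e.g. `get_epoch` wrapping `ceph mgr stat`), not cephadm's actual code:

```python
import time

def wait_for_mgr_epoch(get_epoch, want, timeout=60.0, interval=1.0):
    """Poll a caller-supplied epoch getter until it reaches `want`.

    Returns True once get_epoch() >= want, or False if `timeout`
    seconds elapse first. Mirrors the "Waiting for mgr epoch N..."
    behavior seen in the bootstrap output above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_epoch() >= want:
            return True
        time.sleep(interval)
    return False
```

Polling against a monotonic deadline (rather than counting iterations) keeps the timeout honest even when each `get_epoch` call takes a variable amount of time.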
2026-03-10T09:57:12.992 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:12 vm01 ceph-mon[51930]: Added host vm01
2026-03-10T09:57:12.992 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:12 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a'
2026-03-10T09:57:12.992 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:12 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a'
2026-03-10T09:57:13.417 INFO:teuthology.orchestra.run.vm01.stdout:Enabling the dashboard module...
2026-03-10T09:57:13.806 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:13 vm01 ceph-mon[51930]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:57:13.806 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:13 vm01 ceph-mon[51930]: Saving service mon spec with placement count:5
2026-03-10T09:57:13.806 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:13 vm01 ceph-mon[51930]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:57:13.806 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:13 vm01 ceph-mon[51930]: Saving service mgr spec with placement count:2
2026-03-10T09:57:13.806 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:13 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/4185460240' entity='client.admin'
2026-03-10T09:57:13.806 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:13 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/1437623875' entity='client.admin'
2026-03-10T09:57:13.806 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:13 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a'
2026-03-10T09:57:13.806 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:13 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a'
2026-03-10T09:57:13.806 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:13 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/1296160380' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-10T09:57:14.916 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:14 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: ignoring --setuser ceph since I am not root
2026-03-10T09:57:14.916 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:14 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: ignoring --setgroup ceph since I am not root
2026-03-10T09:57:14.916 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:14 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:14.832+0000 7fc23c645140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T09:57:14.916 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:14 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:14.889+0000 7fc23c645140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T09:57:15.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:15 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a'
2026-03-10T09:57:15.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:15 vm01 ceph-mon[51930]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a'
2026-03-10T09:57:15.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:15 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/1296160380' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-10T09:57:15.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:15 vm01 ceph-mon[51930]: mgrmap e9: a(active, since 7s)
2026-03-10T09:57:15.329 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout {
2026-03-10T09:57:15.330 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 9,
2026-03-10T09:57:15.330 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T09:57:15.330 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "active_name": "a",
2026-03-10T09:57:15.330 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-10T09:57:15.330 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }
2026-03-10T09:57:15.330 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for the mgr to restart...
2026-03-10T09:57:15.330 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr epoch 9...
2026-03-10T09:57:15.445 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:15 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:15.372+0000 7fc23c645140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T09:57:16.097 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:15 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:15.732+0000 7fc23c645140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T09:57:16.097 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:15 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T09:57:16.097 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:15 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T09:57:16.097 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:15 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: from numpy import show_config as show_numpy_config
2026-03-10T09:57:16.097 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:15 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:15.826+0000 7fc23c645140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T09:57:16.097 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:15 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:15.862+0000 7fc23c645140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T09:57:16.097 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:15 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:15.939+0000 7fc23c645140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T09:57:16.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:16 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/2935992572' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T09:57:16.753 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:16 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:16.471+0000 7fc23c645140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T09:57:16.753 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:16 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:16.587+0000 7fc23c645140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T09:57:16.753 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:16 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:16.629+0000 7fc23c645140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T09:57:16.753 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:16 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:16.665+0000 7fc23c645140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T09:57:16.753 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:16 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:16.709+0000 7fc23c645140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T09:57:17.179 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:16 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:16.752+0000 7fc23c645140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T09:57:17.179 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:16 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:16.941+0000 7fc23c645140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T09:57:17.179 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:16 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:16.995+0000 7fc23c645140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T09:57:17.548 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:17 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:17.226+0000 7fc23c645140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T09:57:17.872 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:17 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:17.548+0000 7fc23c645140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T09:57:17.872 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:17 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:17.592+0000 7fc23c645140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T09:57:17.872 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:17 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:17.646+0000 7fc23c645140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T09:57:17.872 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:17 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:17.738+0000 7fc23c645140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T09:57:17.872 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:17 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:17.785+0000 7fc23c645140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T09:57:18.140 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:17 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:17.872+0000 7fc23c645140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T09:57:18.140 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:17 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:17.991+0000 7fc23c645140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T09:57:18.429 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:18 vm01 ceph-mon[51930]: Active manager daemon a restarted 2026-03-10T09:57:18.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:18 vm01 ceph-mon[51930]: Activating manager daemon a 2026-03-10T09:57:18.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:18 vm01 ceph-mon[51930]: osdmap e3: 0 total, 0 up, 0 in 2026-03-10T09:57:18.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:18 vm01 ceph-mon[51930]: mgrmap e10: a(active, starting, since 0.00643441s) 2026-03-10T09:57:18.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:18 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:57:18.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:18 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T09:57:18.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:18 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T09:57:18.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:18 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T09:57:18.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:18 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T09:57:18.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:18 vm01 ceph-mon[51930]: Manager daemon a is now available 2026-03-10T09:57:18.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:18 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:18.429 
INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:18 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:18.140+0000 7fc23c645140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T09:57:18.429 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:18 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:18.181+0000 7fc23c645140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T09:57:19.353 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T09:57:19.354 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-10T09:57:19.354 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T09:57:19.354 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T09:57:19.354 INFO:teuthology.orchestra.run.vm01.stdout:mgr epoch 9 is available 2026-03-10T09:57:19.354 INFO:teuthology.orchestra.run.vm01.stdout:Generating a dashboard self-signed certificate... 
2026-03-10T09:57:19.459 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:19 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T09:57:19.459 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:19 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T09:57:19.459 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:19 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:19.459 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:19 vm01 ceph-mon[51930]: mgrmap e11: a(active, since 1.01023s) 2026-03-10T09:57:19.845 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-10T09:57:19.845 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial admin user... 2026-03-10T09:57:20.382 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$t/ZkorTf4bgsHphMItWGseIG/ulMsMez6r6YDsla.7RLyWRUFxCfW", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773136640, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-10T09:57:20.382 INFO:teuthology.orchestra.run.vm01.stdout:Fetching dashboard port number... 
2026-03-10T09:57:20.501 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:20 vm01 ceph-mon[51930]: [10/Mar/2026:09:57:19] ENGINE Bus STARTING 2026-03-10T09:57:20.501 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:20 vm01 ceph-mon[51930]: [10/Mar/2026:09:57:19] ENGINE Serving on https://192.168.123.101:7150 2026-03-10T09:57:20.501 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:20 vm01 ceph-mon[51930]: [10/Mar/2026:09:57:19] ENGINE Client ('192.168.123.101', 51966) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T09:57:20.501 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:20 vm01 ceph-mon[51930]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T09:57:20.501 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:20 vm01 ceph-mon[51930]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T09:57:20.501 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:20 vm01 ceph-mon[51930]: [10/Mar/2026:09:57:19] ENGINE Serving on http://192.168.123.101:8765 2026-03-10T09:57:20.501 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:20 vm01 ceph-mon[51930]: [10/Mar/2026:09:57:19] ENGINE Bus STARTED 2026-03-10T09:57:20.501 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:20 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:20.501 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:20 vm01 ceph-mon[51930]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:20.501 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:20 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:20.501 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:20 vm01 ceph-mon[51930]: from='mgr.14150 
192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:20.501 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:20 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:20.765 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 8443 2026-03-10T09:57:20.765 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-10T09:57:20.765 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to open ports <[8443]>. firewalld.service is not available 2026-03-10T09:57:20.766 INFO:teuthology.orchestra.run.vm01.stdout:Ceph Dashboard is now available at: 2026-03-10T09:57:20.766 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:57:20.766 INFO:teuthology.orchestra.run.vm01.stdout: URL: https://vm01.local:8443/ 2026-03-10T09:57:20.766 INFO:teuthology.orchestra.run.vm01.stdout: User: admin 2026-03-10T09:57:20.766 INFO:teuthology.orchestra.run.vm01.stdout: Password: ox5y6vstza 2026-03-10T09:57:20.766 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:57:20.766 INFO:teuthology.orchestra.run.vm01.stdout:Saving cluster configuration to /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config directory 2026-03-10T09:57:21.211 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-10T09:57:21.211 INFO:teuthology.orchestra.run.vm01.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-10T09:57:21.211 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:57:21.211 INFO:teuthology.orchestra.run.vm01.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-10T09:57:21.211 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:57:21.211 INFO:teuthology.orchestra.run.vm01.stdout:Or, if you are only running a single cluster on this host: 2026-03-10T09:57:21.211 
INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:57:21.211 INFO:teuthology.orchestra.run.vm01.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-10T09:57:21.211 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:57:21.211 INFO:teuthology.orchestra.run.vm01.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-10T09:57:21.211 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:57:21.211 INFO:teuthology.orchestra.run.vm01.stdout: ceph telemetry on 2026-03-10T09:57:21.211 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:57:21.211 INFO:teuthology.orchestra.run.vm01.stdout:For more information see: 2026-03-10T09:57:21.211 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:57:21.211 INFO:teuthology.orchestra.run.vm01.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-10T09:57:21.212 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:57:21.212 INFO:teuthology.orchestra.run.vm01.stdout:Bootstrap complete. 2026-03-10T09:57:21.251 INFO:tasks.cephadm:Fetching config... 2026-03-10T09:57:21.252 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T09:57:21.252 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-10T09:57:21.273 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-10T09:57:21.273 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T09:57:21.273 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-10T09:57:21.320 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:21 vm01 ceph-mon[51930]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:21.320 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:21 vm01 ceph-mon[51930]: from='client.? 
192.168.123.101:0/2526263862' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T09:57:21.320 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:21 vm01 ceph-mon[51930]: mgrmap e12: a(active, since 2s) 2026-03-10T09:57:21.320 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:21 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/3004556427' entity='client.admin' 2026-03-10T09:57:21.339 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-10T09:57:21.339 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T09:57:21.339 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/keyring of=/dev/stdout 2026-03-10T09:57:21.411 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-10T09:57:21.411 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T09:57:21.411 DEBUG:teuthology.orchestra.run.vm01:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-10T09:57:21.471 INFO:tasks.cephadm:Installing pub ssh key for root users... 
2026-03-10T09:57:21.471 DEBUG:teuthology.orchestra.run.vm01:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIORtGJi8T3ZnVUbCZDMVuz+dgV0u/IBSt3etz434OnxS ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T09:57:21.564 INFO:teuthology.orchestra.run.vm01.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIORtGJi8T3ZnVUbCZDMVuz+dgV0u/IBSt3etz434OnxS ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:57:21.578 DEBUG:teuthology.orchestra.run.vm02:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIORtGJi8T3ZnVUbCZDMVuz+dgV0u/IBSt3etz434OnxS ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T09:57:21.619 INFO:teuthology.orchestra.run.vm02.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIORtGJi8T3ZnVUbCZDMVuz+dgV0u/IBSt3etz434OnxS ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:57:21.628 DEBUG:teuthology.orchestra.run.vm08:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIORtGJi8T3ZnVUbCZDMVuz+dgV0u/IBSt3etz434OnxS ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T09:57:21.662 INFO:teuthology.orchestra.run.vm08.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIORtGJi8T3ZnVUbCZDMVuz+dgV0u/IBSt3etz434OnxS ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:57:21.673 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-10T09:57:21.876 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config 
/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:57:22.329 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-10T09:57:22.330 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-10T09:57:22.578 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:57:23.011 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm02 2026-03-10T09:57:23.011 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T09:57:23.011 DEBUG:teuthology.orchestra.run.vm02:> dd of=/etc/ceph/ceph.conf 2026-03-10T09:57:23.027 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T09:57:23.027 DEBUG:teuthology.orchestra.run.vm02:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:57:23.082 INFO:tasks.cephadm:Adding host vm02 to orchestrator... 2026-03-10T09:57:23.082 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph orch host add vm02 2026-03-10T09:57:23.332 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:57:23.386 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:23 vm01 ceph-mon[51930]: from='client.? 
192.168.123.101:0/2512564205' entity='client.admin' 2026-03-10T09:57:23.386 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:23 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:24.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:24 vm01 ceph-mon[51930]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:24.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:24 vm01 ceph-mon[51930]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm02", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:24.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:24 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:24.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:24 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:24.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:24 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:57:24.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:24 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:24.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:24 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:57:25.248 INFO:teuthology.orchestra.run.vm01.stdout:Added host 'vm02' with addr '192.168.123.102' 2026-03-10T09:57:25.408 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph orch host ls --format=json 2026-03-10T09:57:25.581 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:57:25.606 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:25 vm01 ceph-mon[51930]: Updating vm01:/etc/ceph/ceph.conf 2026-03-10T09:57:25.606 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:25 vm01 ceph-mon[51930]: Updating vm01:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:57:25.606 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:25 vm01 ceph-mon[51930]: Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:57:25.606 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:25 vm01 ceph-mon[51930]: Deploying cephadm binary to vm02 2026-03-10T09:57:25.606 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:25 vm01 ceph-mon[51930]: Updating vm01:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.client.admin.keyring 2026-03-10T09:57:25.606 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:25 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:25.606 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:25 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:25.606 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:25 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:25.606 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:25 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:25.606 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:25 vm01 ceph-mon[51930]: from='mgr.14150 
192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:25.606 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:25 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:57:25.606 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:25 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:25.606 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:25 vm01 ceph-mon[51930]: mgrmap e13: a(active, since 6s) 2026-03-10T09:57:25.606 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:25 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:25.606 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:25 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:25.832 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:57:25.832 INFO:teuthology.orchestra.run.vm01.stdout:[{"addr": "192.168.123.101", "hostname": "vm01", "labels": [], "status": ""}, {"addr": "192.168.123.102", "hostname": "vm02", "labels": [], "status": ""}] 2026-03-10T09:57:25.989 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm08 2026-03-10T09:57:25.989 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T09:57:25.989 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.conf 2026-03-10T09:57:26.004 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T09:57:26.004 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:57:26.063 INFO:tasks.cephadm:Adding host vm08 to orchestrator... 
2026-03-10T09:57:26.063 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph orch host add vm08 2026-03-10T09:57:26.228 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:57:26.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:26 vm01 ceph-mon[51930]: Added host vm02 2026-03-10T09:57:26.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:26 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:26.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:26 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:27.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:27 vm01 ceph-mon[51930]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T09:57:27.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:27 vm01 ceph-mon[51930]: from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:27.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:27 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:28.031 INFO:teuthology.orchestra.run.vm01.stdout:Added host 'vm08' with addr '192.168.123.108' 2026-03-10T09:57:28.188 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph orch host ls --format=json 2026-03-10T09:57:28.367 
INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:57:28.630 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:57:28.631 INFO:teuthology.orchestra.run.vm01.stdout:[{"addr": "192.168.123.101", "hostname": "vm01", "labels": [], "status": ""}, {"addr": "192.168.123.102", "hostname": "vm02", "labels": [], "status": ""}, {"addr": "192.168.123.108", "hostname": "vm08", "labels": [], "status": ""}] 2026-03-10T09:57:28.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:28 vm01 ceph-mon[51930]: Deploying cephadm binary to vm08 2026-03-10T09:57:28.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:28.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:28.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:28.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:28.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:28.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:57:28.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:28.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:28 vm01 
ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:57:28.790 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T09:57:28.790 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph osd crush tunables default 2026-03-10T09:57:28.964 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:57:29.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:29 vm01 ceph-mon[51930]: Added host vm08 2026-03-10T09:57:29.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:29 vm01 ceph-mon[51930]: Updating vm02:/etc/ceph/ceph.conf 2026-03-10T09:57:29.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:29 vm01 ceph-mon[51930]: Updating vm02:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:57:29.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:29 vm01 ceph-mon[51930]: Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:57:29.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:29 vm01 ceph-mon[51930]: Updating vm02:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.client.admin.keyring 2026-03-10T09:57:29.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:29 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:29.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:29 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:29.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:29 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:29.929 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:29 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:29.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:29 vm01 ceph-mon[51930]: from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T09:57:29.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:29 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:29.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:29 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:29.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:29 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/125801956' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T09:57:29.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:29 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:29.981 INFO:teuthology.orchestra.run.vm01.stderr:adjusted tunables profile to default 2026-03-10T09:57:30.134 INFO:tasks.cephadm:Adding mon.a on vm01 2026-03-10T09:57:30.134 INFO:tasks.cephadm:Adding mon.b on vm02 2026-03-10T09:57:30.134 INFO:tasks.cephadm:Adding mon.c on vm08 2026-03-10T09:57:30.134 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph orch apply mon '3;vm01:192.168.123.101=a;vm02:192.168.123.102=b;vm08:192.168.123.108=c' 2026-03-10T09:57:30.331 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:57:30.377 
INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:57:30.681 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled mon update... 2026-03-10T09:57:30.866 DEBUG:teuthology.orchestra.run.vm02:mon.b> sudo journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mon.b.service 2026-03-10T09:57:30.868 DEBUG:teuthology.orchestra.run.vm08:mon.c> sudo journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mon.c.service 2026-03-10T09:57:30.870 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-10T09:57:30.870 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph mon dump -f json 2026-03-10T09:57:31.115 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:57:31.154 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:57:31.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:30 vm01 ceph-mon[51930]: from='client.? 
192.168.123.101:0/125801956' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T09:57:31.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:30 vm01 ceph-mon[51930]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T09:57:31.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:30 vm01 ceph-mon[51930]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm01:192.168.123.101=a;vm02:192.168.123.102=b;vm08:192.168.123.108=c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:31.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:30 vm01 ceph-mon[51930]: Saving service mon spec with placement vm01:192.168.123.101=a;vm02:192.168.123.102=b;vm08:192.168.123.108=c;count:3 2026-03-10T09:57:31.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:30 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:31.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:30 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:31.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:30 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:31.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:30 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:31.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:30 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:31.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:30 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:57:31.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:30 vm01 ceph-mon[51930]: from='mgr.14150 
192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:31.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:30 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:57:31.444 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:57:31.444 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"4533cc1c-1c67-11f1-85c0-e37e5114407d","modified":"2026-03-10T09:56:53.660147Z","created":"2026-03-10T09:56:53.660147Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:57:31.444 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T09:57:32.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:31 vm01 ceph-mon[51930]: Updating vm08:/etc/ceph/ceph.conf 2026-03-10T09:57:32.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:31 vm01 ceph-mon[51930]: Updating vm08:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:57:32.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:31 vm01 ceph-mon[51930]: Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:57:32.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:31 vm01 ceph-mon[51930]: from='client.? 
192.168.123.108:0/2627979316' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:57:32.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:31 vm01 ceph-mon[51930]: Updating vm08:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.client.admin.keyring 2026-03-10T09:57:32.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:31 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:32.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:31 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:32.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:31 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:32.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:31 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T09:57:32.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:31 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:32.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:31 vm01 ceph-mon[51930]: Deploying daemon mon.c on vm08 2026-03-10T09:57:32.603 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-10T09:57:32.603 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph mon dump -f json 2026-03-10T09:57:32.908 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.c/config 2026-03-10T09:57:33.292 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 podman[55442]: 2026-03-10 09:57:33.249739108 +0000 UTC m=+0.021569229 container create 94d054009b9423234954bab37992987140dd51df4c879312f737318931726c60 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-c, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 podman[55442]: 2026-03-10 09:57:33.297364614 +0000 UTC m=+0.069194735 container init 94d054009b9423234954bab37992987140dd51df4c879312f737318931726c60 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-c, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, CEPH_REF=squid) 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 podman[55442]: 2026-03-10 09:57:33.311883494 +0000 UTC m=+0.083713615 container start 94d054009b9423234954bab37992987140dd51df4c879312f737318931726c60 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-c, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 bash[55442]: 94d054009b9423234954bab37992987140dd51df4c879312f737318931726c60 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 podman[55442]: 2026-03-10 09:57:33.237669112 +0000 UTC m=+0.009499233 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 systemd[1]: Started Ceph mon.c for 4533cc1c-1c67-11f1-85c0-e37e5114407d. 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: set uid:gid to 167:167 (ceph:ceph) 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: pidfile_write: ignore empty --pid-file 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: load: jerasure load: lrc 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: RocksDB version: 7.9.2 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Git sha 0 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: DB SUMMARY 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: DB Session ID: UY88N0UOZ1CAEC5KXJ19 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: CURRENT file: CURRENT 2026-03-10T09:57:33.613 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: IDENTITY file: IDENTITY 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-10T09:57:33.614 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: SST files in /var/lib/ceph/mon/ceph-c/store.db dir, Total Num: 0, files: 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-c/store.db: 000004.log size: 511 ; 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.error_if_exists: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.create_if_missing: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.paranoid_checks: 1 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.env: 0x56355680bdc0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.fs: PosixFileSystem 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.info_log: 0x5635578ac700 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_file_opening_threads: 16 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.statistics: (nil) 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 
09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.use_fsync: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_log_file_size: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.keep_log_file_num: 1000 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.recycle_log_file_num: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.allow_fallocate: 1 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.allow_mmap_reads: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.allow_mmap_writes: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.use_direct_reads: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.create_missing_column_families: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.db_log_dir: 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.wal_dir: 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 
ceph-mon[55477]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.advise_random_on_open: 1 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.db_write_buffer_size: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.write_buffer_manager: 0x5635578b1900 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.rate_limiter: (nil) 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 
2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.wal_recovery_mode: 2 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.enable_thread_tracking: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.enable_pipelined_write: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.unordered_write: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.row_cache: None 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.wal_filter: None 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.allow_ingest_behind: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.two_write_queues: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: 
Options.manual_wal_flush: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.wal_compression: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.atomic_flush: 0 2026-03-10T09:57:33.614 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.log_readahead_size: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.best_efforts_recovery: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.allow_data_in_errors: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.db_host_id: __hostname__ 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 
09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_background_jobs: 2 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_background_compactions: -1 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_subcompactions: 1 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_total_wal_size: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_open_files: -1 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.bytes_per_sync: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.wal_bytes_per_sync: 0 
2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compaction_readahead_size: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_background_flushes: -1 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Compression algorithms supported: 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: kZSTD supported: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: kXpressCompression supported: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: kBZip2Compression supported: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: kLZ4Compression supported: 1 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: kZlibCompression supported: 1 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: kLZ4HCCompression supported: 1 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: kSnappyCompression supported: 1 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: DMutex implementation: pthread_mutex_t 
2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.merge_operator: 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compaction_filter: None 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compaction_filter_factory: None 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.sst_partitioner_factory: None 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5635578ac640) 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout: cache_index_and_filter_blocks: 1 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T09:57:33.615 
INFO:journalctl@ceph.mon.c.vm08.stdout: pin_top_level_index_and_filter: 1 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout: index_type: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout: data_block_index_type: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout: index_shortening: 1 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout: checksum: 4 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout: no_block_cache: 0 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout: block_cache: 0x5635578d1350 2026-03-10T09:57:33.615 INFO:journalctl@ceph.mon.c.vm08.stdout: block_cache_name: BinnedLRUCache 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: block_cache_options: 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: capacity : 536870912 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: num_shard_bits : 4 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: strict_capacity_limit : 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: high_pri_pool_ratio: 0.000 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: block_cache_compressed: (nil) 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: persistent_cache: (nil) 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: block_size: 4096 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: block_size_deviation: 10 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: block_restart_interval: 16 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: index_block_restart_interval: 1 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: metadata_block_size: 4096 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: partition_filters: 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: 
use_delta_encoding: 1 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: filter_policy: bloomfilter 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: whole_key_filtering: 1 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: verify_compression: 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: read_amp_bytes_per_bit: 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: format_version: 5 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: enable_index_compression: 1 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: block_align: 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: max_auto_readahead_size: 262144 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: prepopulate_block_cache: 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: initial_auto_readahead_size: 8192 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout: num_file_reads_for_auto_readahead: 2 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.write_buffer_size: 33554432 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_write_buffer_number: 2 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compression: NoCompression 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.bottommost_compression: Disabled 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.prefix_extractor: nullptr 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 
ceph-mon[55477]: rocksdb: Options.num_levels: 7 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 
2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compression_opts.level: 32767 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compression_opts.strategy: 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compression_opts.enabled: false 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.target_file_size_base: 67108864 
2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T09:57:33.616 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 
ceph-mon[55477]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.arena_block_size: 1048576 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.disable_auto_compactions: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.inplace_update_support: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.bloom_locality: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.max_successive_merges: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: 
Options.optimize_filters_for_hits: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.paranoid_file_checks: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.force_consistency_checks: 1 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.ttl: 2592000 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.enable_blob_files: false 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.min_blob_size: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.blob_file_size: 268435456 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T09:57:33.617 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.blob_file_starting_level: 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 05e1dd46-5343-4825-8f0e-6bb6b4c305f2 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773136653342631, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773136653343216, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, 
"file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773136653, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "05e1dd46-5343-4825-8f0e-6bb6b4c305f2", "db_session_id": "UY88N0UOZ1CAEC5KXJ19", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773136653343279, "job": 1, "event": "recovery_finished"} 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T09:57:33.617 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5635578d2e00 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: DB pointer 0x5635579e8000 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.c does not exist in monmap, will attempt to join an existing cluster 2026-03-10T09:57:33.617 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: using public_addr v2:192.168.123.108:0/0 -> [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: ** DB Stats ** 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Interval stall: 
00:00:0.000 H:M:S, 0.0 percent 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: ** Compaction Stats [default] ** 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.8 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.8 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.8 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: ** Compaction Stats [default] ** 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.8 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T09:57:33.618 
INFO:journalctl@ceph.mon.c.vm08.stdout: 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Block cache BinnedLRUCache@0x5635578d1350#7 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 6e-06 secs_since: 0 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 
KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: starting mon.c rank -1 at public addrs [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] at bind addrs [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon_data /var/lib/ceph/mon/ceph-c fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.c@-1(???) e0 preinit fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.c@-1(synchronizing).mds e1 new map 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.c@-1(synchronizing).mds e1 print_map 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: e1 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: btime 2026-03-10T09:56:55:158427+0000 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: legacy client fscid: -1 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout: No filesystems configured 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: 
mon.c@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.c@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.c@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.c@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.c@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.c@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-10T09:57:33.618 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.c@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mkfs 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 
ceph-mon[55477]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: monmap epoch 1 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: last_changed 2026-03-10T09:56:53.660147+0000 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: created 2026-03-10T09:56:53.660147+0000 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: min_mon_release 19 (squid) 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: election_strategy: 1 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: fsmap 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mgrmap e1: no daemons active 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/3205654489' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 
192.168.123.101:0/2458499503' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/2458499503' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: monmap epoch 1 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: last_changed 2026-03-10T09:56:53.660147+0000 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: created 2026-03-10T09:56:53.660147+0000 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: min_mon_release 19 (squid) 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: election_strategy: 1 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: fsmap 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mgrmap e1: no daemons active 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 
192.168.123.101:0/4087453228' entity='client.admin' 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/47960258' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/700166012' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Activating manager daemon a 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mgrmap e2: a(active, starting, since 0.00464264s) 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Manager daemon a is now available 
2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14100 192.168.123.101:0/4285148820' entity='mgr.a' 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mgrmap e3: a(active, since 1.00936s) 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/1087518431' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mgrmap e4: a(active, since 2s) 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/953795590' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T09:57:33.619 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 
192.168.123.101:0/953795590' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/2138066393' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/2138066393' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mgrmap e5: a(active, since 3s) 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/3207189997' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Active manager daemon a restarted 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Activating manager daemon a 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: osdmap e2: 0 total, 0 up, 0 in 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mgrmap e6: a(active, starting, since 0.00512457s) 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 
ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Manager daemon a is now available 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T09:57:33.620 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Found migration_current of "None". Setting to last migration. 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mgrmap e7: a(active, since 1.0101s) 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: [10/Mar/2026:09:57:08] ENGINE Bus STARTING 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: [10/Mar/2026:09:57:08] ENGINE Serving on http://192.168.123.101:8765 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 
vm08 ceph-mon[55477]: [10/Mar/2026:09:57:08] ENGINE Serving on https://192.168.123.101:7150 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: [10/Mar/2026:09:57:08] ENGINE Bus STARTED 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: [10/Mar/2026:09:57:08] ENGINE Client ('192.168.123.101', 43166) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Generating ssh key... 
2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mgrmap e8: a(active, since 2s) 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm01", "addr": "192.168.123.101", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Deploying cephadm binary to vm01 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Added host vm01 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Saving service mon spec with placement count:5 2026-03-10T09:57:33.620 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Saving service mgr spec with placement count:2 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/4185460240' entity='client.admin' 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/1437623875' entity='client.admin' 2026-03-10T09:57:33.620 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/1296160380' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14118 192.168.123.101:0/10935650' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 
192.168.123.101:0/1296160380' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mgrmap e9: a(active, since 7s) 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/2935992572' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Active manager daemon a restarted 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Activating manager daemon a 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: osdmap e3: 0 total, 0 up, 0 in 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mgrmap e10: a(active, starting, since 0.00643441s) 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 
192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Manager daemon a is now available 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mgrmap e11: a(active, since 1.01023s) 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: [10/Mar/2026:09:57:19] ENGINE Bus STARTING 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: [10/Mar/2026:09:57:19] ENGINE Serving on https://192.168.123.101:7150 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: [10/Mar/2026:09:57:19] ENGINE Client ('192.168.123.101', 51966) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14154 -' 
entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: [10/Mar/2026:09:57:19] ENGINE Serving on http://192.168.123.101:8765 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: [10/Mar/2026:09:57:19] ENGINE Bus STARTED 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 
192.168.123.101:0/2526263862' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mgrmap e12: a(active, since 2s) 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/3004556427' entity='client.admin' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/2512564205' entity='client.admin' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm02", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 
192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Updating vm01:/etc/ceph/ceph.conf 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Updating vm01:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Deploying cephadm binary to vm02 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Updating vm01:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.client.admin.keyring 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mgrmap e13: a(active, since 6s) 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.621 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Added host vm02 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 
192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Deploying cephadm binary to vm08 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Added host vm08 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Updating vm02:/etc/ceph/ceph.conf 2026-03-10T09:57:33.622 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Updating vm02:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Updating vm02:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.client.admin.keyring 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 
192.168.123.101:0/125801956' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/125801956' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm01:192.168.123.101=a;vm02:192.168.123.102=b;vm08:192.168.123.108=c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Saving service mon spec with placement vm01:192.168.123.101=a;vm02:192.168.123.102=b;vm08:192.168.123.108=c;count:3 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 
2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Updating vm08:/etc/ceph/ceph.conf 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Updating vm08:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='client.? 
192.168.123.108:0/2627979316' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:57:33.622 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Updating vm08:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.client.admin.keyring 2026-03-10T09:57:33.623 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.623 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.623 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:33.623 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T09:57:33.623 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:33.623 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: Deploying daemon mon.c on vm08 2026-03-10T09:57:33.623 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:33 vm08 ceph-mon[55477]: mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-10T09:57:35.411 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:35 vm02 ceph-mon[54811]: mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-10T09:57:38.405 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:57:38.405 
INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":2,"fsid":"4533cc1c-1c67-11f1-85c0-e37e5114407d","modified":"2026-03-10T09:57:33.381769Z","created":"2026-03-10T09:56:53.660147Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:3300","nonce":0},{"type":"v1","addr":"192.168.123.108:6789","nonce":0}]},"addr":"192.168.123.108:6789/0","public_addr":"192.168.123.108:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-10T09:57:38.405 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 2
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: Deploying daemon mon.b on vm02
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: mon.a calling monitor election
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='client.? 192.168.123.108:0/4253079873' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: mon.c calling monitor election
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: monmap epoch 2
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: last_changed 2026-03-10T09:57:33.381769+0000
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: created 2026-03-10T09:56:53.660147+0000
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: min_mon_release 19 (squid)
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: election_strategy: 1
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: fsmap
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: osdmap e4: 0 total, 0 up, 0 in
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: mgrmap e13: a(active, since 20s)
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: overall HEALTH_OK
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:38.790 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:38.791 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:38 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: Deploying daemon mon.b on vm02
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: mon.a calling monitor election
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='client.? 192.168.123.108:0/4253079873' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: mon.c calling monitor election
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: monmap epoch 2
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: last_changed 2026-03-10T09:57:33.381769+0000
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: created 2026-03-10T09:56:53.660147+0000
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: min_mon_release 19 (squid)
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: election_strategy: 1
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: fsmap
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: osdmap e4: 0 total, 0 up, 0 in
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: mgrmap e13: a(active, since 20s)
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: overall HEALTH_OK
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:38.930 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:38 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:39.562 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-10T09:57:39.562 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph mon dump -f json
2026-03-10T09:57:39.679 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:39 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:39.381+0000 7fc2089b1640 -1 mgr.server handle_report got status from non-daemon mon.c
2026-03-10T09:57:39.733 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.c/config
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: Deploying daemon mon.b on vm02
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: mon.a calling monitor election
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='client.? 192.168.123.108:0/4253079873' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: mon.c calling monitor election
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: monmap epoch 2
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: last_changed 2026-03-10T09:57:33.381769+0000
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: created 2026-03-10T09:56:53.660147+0000
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: min_mon_release 19 (squid)
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: election_strategy: 1
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: fsmap
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: osdmap e4: 0 total, 0 up, 0 in
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: mgrmap e13: a(active, since 20s)
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: overall HEALTH_OK
2026-03-10T09:57:41.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:41.660 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:41.660 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:41.660 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:41 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.418 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:57:44.418 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":3,"fsid":"4533cc1c-1c67-11f1-85c0-e37e5114407d","modified":"2026-03-10T09:57:39.106959Z","created":"2026-03-10T09:56:53.660147Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:3300","nonce":0},{"type":"v1","addr":"192.168.123.108:6789","nonce":0}]},"addr":"192.168.123.108:6789/0","public_addr":"192.168.123.108:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-10T09:57:44.419 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 3
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: Updating vm01:/etc/ceph/ceph.conf
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: Updating vm02:/etc/ceph/ceph.conf
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: Updating vm08:/etc/ceph/ceph.conf
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: Updating vm08:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: Updating vm02:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: Updating vm01:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: mon.a calling monitor election
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: mon.c calling monitor election
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: monmap epoch 3
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: last_changed 2026-03-10T09:57:39.106959+0000
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: created 2026-03-10T09:56:53.660147+0000
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: min_mon_release 19 (squid)
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: election_strategy: 1
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: 2: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: fsmap
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: osdmap e4: 0 total, 0 up, 0 in
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: mgrmap e13: a(active, since 25s)
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: overall HEALTH_OK
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.423 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:44 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: Updating vm01:/etc/ceph/ceph.conf
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: Updating vm02:/etc/ceph/ceph.conf
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: Updating vm08:/etc/ceph/ceph.conf
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: Updating vm08:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: Updating vm02:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: Updating vm01:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: mon.a calling monitor election
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: mon.c calling monitor election
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: monmap epoch 3
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: last_changed 2026-03-10T09:57:39.106959+0000
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: created 2026-03-10T09:56:53.660147+0000
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: min_mon_release 19 (squid)
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: election_strategy: 1
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: 2: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: fsmap
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: osdmap e4: 0 total, 0 up, 0 in
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: mgrmap e13: a(active, since 25s)
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: overall HEALTH_OK
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.431 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:44 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:44.577 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-10T09:57:44.577 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph config generate-minimal-conf
2026-03-10T09:57:44.787 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config
2026-03-10T09:57:45.069 INFO:teuthology.orchestra.run.vm01.stdout:# minimal ceph.conf for 4533cc1c-1c67-11f1-85c0-e37e5114407d
2026-03-10T09:57:45.069 INFO:teuthology.orchestra.run.vm01.stdout:[global]
2026-03-10T09:57:45.069 INFO:teuthology.orchestra.run.vm01.stdout: fsid = 4533cc1c-1c67-11f1-85c0-e37e5114407d
2026-03-10T09:57:45.069 INFO:teuthology.orchestra.run.vm01.stdout: mon_host = [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0]
2026-03-10T09:57:45.277 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring...
2026-03-10T09:57:45.277 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T09:57:45.278 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T09:57:45.307 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T09:57:45.307 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T09:57:45.376 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:45.376 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:45.376 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T09:57:45.376 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:45.376 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:45.376 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: Reconfiguring mon.a (unknown last config time)...
2026-03-10T09:57:45.376 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T09:57:45.376 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T09:57:45.376 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:57:45.376 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: Reconfiguring daemon mon.a on vm01
2026-03-10T09:57:45.376 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='client.? 192.168.123.108:0/3173375146' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T09:57:45.377 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:45.377 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:45.377 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: Reconfiguring mon.b (monmap changed)...
2026-03-10T09:57:45.377 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T09:57:45.377 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T09:57:45.377 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:45.377 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: Reconfiguring daemon mon.b on vm02 2026-03-10T09:57:45.377 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/2922559671' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:45.377 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:45.377 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:45.377 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T09:57:45.377 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T09:57:45.377 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:45.377 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:45 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:57:45.380 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T09:57:45.380 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T09:57:45.419 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T09:57:45.419 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:57:45.492 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T09:57:45.492 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: Reconfiguring mon.a (unknown last config time)... 
2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: Reconfiguring daemon mon.a on vm01 2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='client.? 192.168.123.108:0/3173375146' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: Reconfiguring mon.b (monmap changed)... 
2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T09:57:45.524 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:45.525 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: Reconfiguring daemon mon.b on vm02 2026-03-10T09:57:45.525 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/2922559671' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:45.525 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:45.525 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:45.525 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T09:57:45.525 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T09:57:45.525 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:45.525 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:45 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:57:45.541 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T09:57:45.541 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:57:45.612 INFO:tasks.cephadm:Adding mgr.a on vm01 2026-03-10T09:57:45.612 INFO:tasks.cephadm:Adding mgr.b on vm02 2026-03-10T09:57:45.612 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph orch apply mgr '2;vm01=a;vm02=b' 2026-03-10T09:57:45.865 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.c/config 2026-03-10T09:57:46.150 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled mgr update... 2026-03-10T09:57:46.331 DEBUG:teuthology.orchestra.run.vm02:mgr.b> sudo journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mgr.b.service 2026-03-10T09:57:46.333 INFO:tasks.cephadm:Deploying OSDs... 2026-03-10T09:57:46.333 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T09:57:46.333 DEBUG:teuthology.orchestra.run.vm01:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T09:57:46.351 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T09:57:46.352 DEBUG:teuthology.orchestra.run.vm01:> ls /dev/[sv]d? 
2026-03-10T09:57:46.411 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vda 2026-03-10T09:57:46.411 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdb 2026-03-10T09:57:46.411 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdc 2026-03-10T09:57:46.411 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdd 2026-03-10T09:57:46.411 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vde 2026-03-10T09:57:46.411 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T09:57:46.411 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T09:57:46.411 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdb 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: Updating vm01:/etc/ceph/ceph.conf 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: Updating vm02:/etc/ceph/ceph.conf 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: Updating vm08:/etc/ceph/ceph.conf 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: Updating vm08:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: Updating vm02:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: Updating vm01:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: mon.a calling monitor election 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:57:46.465 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: mon.c calling monitor election 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 
09:57:46 vm02 ceph-mon[54811]: mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: monmap epoch 3 2026-03-10T09:57:46.465 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: last_changed 2026-03-10T09:57:39.106959+0000 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: created 2026-03-10T09:56:53.660147+0000 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: min_mon_release 19 (squid) 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: election_strategy: 1 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: 2: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: fsmap 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: mgrmap e13: a(active, since 25s) 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: overall HEALTH_OK 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 
192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.466 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: Reconfiguring mon.a (unknown last config time)... 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: Reconfiguring daemon mon.a on vm01 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='client.? 192.168.123.108:0/3173375146' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: Reconfiguring mon.b (monmap changed)... 
2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: Reconfiguring daemon mon.b on vm02 2026-03-10T09:57:46.466 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/2922559671' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:46.467 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.467 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:46.467 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T09:57:46.467 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T09:57:46.467 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:46.467 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:46 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:57:46.472 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdb 2026-03-10T09:57:46.472 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T09:57:46.472 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 221 Links: 1 Device type: fc,10 2026-03-10T09:57:46.472 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T09:57:46.472 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T09:57:46.472 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 09:57:23.494783199 +0000 2026-03-10T09:57:46.472 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 09:54:19.841420223 +0000 2026-03-10T09:57:46.472 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 09:54:19.841420223 +0000 2026-03-10T09:57:46.472 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 09:51:26.219000000 +0000 2026-03-10T09:57:46.472 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T09:57:46.537 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-10T09:57:46.537 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-10T09:57:46.537 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000208269 s, 2.5 MB/s 2026-03-10T09:57:46.538 DEBUG:teuthology.orchestra.run.vm01:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T09:57:46.595 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdc 2026-03-10T09:57:46.654 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdc 2026-03-10T09:57:46.654 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T09:57:46.654 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 222 Links: 1 Device type: fc,20 2026-03-10T09:57:46.654 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T09:57:46.654 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T09:57:46.654 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 09:57:23.536783121 +0000 2026-03-10T09:57:46.654 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 09:54:19.791420170 +0000 2026-03-10T09:57:46.654 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 09:54:19.791420170 +0000 2026-03-10T09:57:46.654 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 09:51:26.225000000 +0000 2026-03-10T09:57:46.655 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T09:57:46.723 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-10T09:57:46.723 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-10T09:57:46.723 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000206286 s, 2.5 MB/s 2026-03-10T09:57:46.725 DEBUG:teuthology.orchestra.run.vm01:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T09:57:46.782 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdd 2026-03-10T09:57:46.841 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdd 2026-03-10T09:57:46.841 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T09:57:46.841 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 223 Links: 1 Device type: fc,30 2026-03-10T09:57:46.841 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T09:57:46.841 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T09:57:46.841 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 09:57:23.587783026 +0000 2026-03-10T09:57:46.841 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 09:54:19.831420213 +0000 2026-03-10T09:57:46.841 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 09:54:19.831420213 +0000 2026-03-10T09:57:46.841 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 09:51:26.229000000 +0000 2026-03-10T09:57:46.841 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T09:57:46.907 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-10T09:57:46.907 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-10T09:57:46.907 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000195045 s, 2.6 MB/s 2026-03-10T09:57:46.909 DEBUG:teuthology.orchestra.run.vm01:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T09:57:46.967 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vde 2026-03-10T09:57:47.029 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vde 2026-03-10T09:57:47.029 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T09:57:47.029 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 224 Links: 1 Device type: fc,40 2026-03-10T09:57:47.029 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T09:57:47.029 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T09:57:47.029 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 09:57:23.628782950 +0000 2026-03-10T09:57:47.029 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 09:54:19.781420159 +0000 2026-03-10T09:57:47.029 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 09:54:19.781420159 +0000 2026-03-10T09:57:47.029 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 09:51:26.233000000 +0000 2026-03-10T09:57:47.029 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T09:57:47.097 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-10T09:57:47.097 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-10T09:57:47.097 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000226594 s, 2.3 MB/s 2026-03-10T09:57:47.098 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T09:57:47.146 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:47 vm02 systemd[1]: Starting Ceph mgr.b for 4533cc1c-1c67-11f1-85c0-e37e5114407d... 
2026-03-10T09:57:47.159 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: mon.b calling monitor election 2026-03-10T09:57:47.159 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: from='client.14205 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm01=a;vm02=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:47.159 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: Saving service mgr spec with placement vm01=a;vm02=b;count:2 2026-03-10T09:57:47.159 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: Deploying daemon mgr.b on vm02 2026-03-10T09:57:47.159 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:47.159 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: mon.b calling monitor election 2026-03-10T09:57:47.159 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: mon.c calling monitor election 2026-03-10T09:57:47.159 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: mon.a calling monitor election 2026-03-10T09:57:47.159 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T09:57:47.159 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: monmap epoch 3 2026-03-10T09:57:47.159 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:57:47.164 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T09:57:47.164 DEBUG:teuthology.orchestra.run.vm02:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T09:57:47.186 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T09:57:47.186 DEBUG:teuthology.orchestra.run.vm02:> ls /dev/[sv]d? 
2026-03-10T09:57:47.263 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vda 2026-03-10T09:57:47.263 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdb 2026-03-10T09:57:47.263 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdc 2026-03-10T09:57:47.263 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdd 2026-03-10T09:57:47.263 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vde 2026-03-10T09:57:47.263 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T09:57:47.263 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T09:57:47.263 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdb 2026-03-10T09:57:47.325 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdb 2026-03-10T09:57:47.325 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T09:57:47.325 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10 2026-03-10T09:57:47.325 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T09:57:47.325 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T09:57:47.325 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 09:57:27.781649797 +0000 2026-03-10T09:57:47.325 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 09:54:20.568961656 +0000 2026-03-10T09:57:47.325 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 09:54:20.568961656 +0000 2026-03-10T09:57:47.325 INFO:teuthology.orchestra.run.vm02.stdout: Birth: 2026-03-10 09:50:30.287000000 +0000 2026-03-10T09:57:47.326 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: mon.b calling monitor election 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: 
from='client.14205 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm01=a;vm02=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: Saving service mgr spec with placement vm01=a;vm02=b;count:2 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: Deploying daemon mgr.b on vm02 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: mon.b calling monitor election 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: mon.c calling monitor election 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: mon.a calling monitor election 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: monmap epoch 3 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: last_changed 2026-03-10T09:57:39.106959+0000 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: created 2026-03-10T09:56:53.660147+0000 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: min_mon_release 19 (squid) 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: election_strategy: 1 2026-03-10T09:57:47.399 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: 2: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: fsmap 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: mgrmap e13: a(active, since 28s) 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: overall HEALTH_OK 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:47 vm02 podman[56119]: 2026-03-10 09:57:47.145968173 +0000 UTC m=+0.019944091 container create 95c96277a3745f7a3f545fcef91bed7f148b417823e0e9864dff2123c5ba8026 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b, org.opencontainers.image.authors=Ceph Release Team , ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20260223, 
org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:47 vm02 podman[56119]: 2026-03-10 09:57:47.206824932 +0000 UTC m=+0.080800859 container init 95c96277a3745f7a3f545fcef91bed7f148b417823e0e9864dff2123c5ba8026 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_REF=squid) 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:47 vm02 podman[56119]: 2026-03-10 09:57:47.21276645 +0000 UTC m=+0.086742358 container start 95c96277a3745f7a3f545fcef91bed7f148b417823e0e9864dff2123c5ba8026 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:47 vm02 bash[56119]: 95c96277a3745f7a3f545fcef91bed7f148b417823e0e9864dff2123c5ba8026 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:47 vm02 podman[56119]: 2026-03-10 09:57:47.138283475 +0000 UTC m=+0.012259403 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:47 vm02 systemd[1]: Started Ceph mgr.b for 4533cc1c-1c67-11f1-85c0-e37e5114407d. 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:47.348+0000 7f5d72424140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T09:57:47.399 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:47.398+0000 7f5d72424140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T09:57:47.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: last_changed 2026-03-10T09:57:39.106959+0000 2026-03-10T09:57:47.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: created 2026-03-10T09:56:53.660147+0000 2026-03-10T09:57:47.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: min_mon_release 19 (squid) 2026-03-10T09:57:47.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: election_strategy: 1 2026-03-10T09:57:47.429 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T09:57:47.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T09:57:47.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: 2: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-10T09:57:47.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: fsmap 2026-03-10T09:57:47.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T09:57:47.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: mgrmap e13: a(active, since 28s) 2026-03-10T09:57:47.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: overall HEALTH_OK 2026-03-10T09:57:47.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:47 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:57:47.483 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-10T09:57:47.483 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-10T09:57:47.483 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000169296 s, 3.0 MB/s 2026-03-10T09:57:47.484 DEBUG:teuthology.orchestra.run.vm02:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T09:57:47.586 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdc 2026-03-10T09:57:47.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: mon.b calling monitor election 2026-03-10T09:57:47.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: from='client.14205 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm01=a;vm02=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:47.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: Saving service mgr spec with placement vm01=a;vm02=b;count:2 2026-03-10T09:57:47.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: Deploying daemon mgr.b on vm02 2026-03-10T09:57:47.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:47.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: mon.b calling monitor election 2026-03-10T09:57:47.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: mon.c calling monitor election 2026-03-10T09:57:47.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: mon.a calling monitor election 2026-03-10T09:57:47.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T09:57:47.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: monmap epoch 3 2026-03-10T09:57:47.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:57:47.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: last_changed 2026-03-10T09:57:39.106959+0000 2026-03-10T09:57:47.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 
09:57:47 vm08 ceph-mon[55477]: created 2026-03-10T09:56:53.660147+0000 2026-03-10T09:57:47.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: min_mon_release 19 (squid) 2026-03-10T09:57:47.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: election_strategy: 1 2026-03-10T09:57:47.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T09:57:47.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T09:57:47.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: 2: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-10T09:57:47.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: fsmap 2026-03-10T09:57:47.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T09:57:47.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: mgrmap e13: a(active, since 28s) 2026-03-10T09:57:47.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: overall HEALTH_OK 2026-03-10T09:57:47.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:47 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:57:47.639 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdc 2026-03-10T09:57:47.639 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T09:57:47.639 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20 2026-03-10T09:57:47.639 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T09:57:47.639 
INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T09:57:47.639 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 09:57:27.807649706 +0000 2026-03-10T09:57:47.639 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 09:54:20.581961662 +0000 2026-03-10T09:57:47.639 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 09:54:20.581961662 +0000 2026-03-10T09:57:47.639 INFO:teuthology.orchestra.run.vm02.stdout: Birth: 2026-03-10 09:50:30.290000000 +0000 2026-03-10T09:57:47.639 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T09:57:47.677 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-10T09:57:47.678 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-10T09:57:47.678 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000230941 s, 2.2 MB/s 2026-03-10T09:57:47.679 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T09:57:47.759 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdd 2026-03-10T09:57:47.833 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdd 2026-03-10T09:57:47.833 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T09:57:47.833 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-10T09:57:47.833 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T09:57:47.833 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T09:57:47.833 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 09:57:27.835649609 +0000 2026-03-10T09:57:47.833 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 09:54:20.599961670 +0000 2026-03-10T09:57:47.833 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 09:54:20.599961670 +0000 
2026-03-10T09:57:47.833 INFO:teuthology.orchestra.run.vm02.stdout: Birth: 2026-03-10 09:50:30.295000000 +0000 2026-03-10T09:57:47.833 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T09:57:47.962 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:47 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:47.901+0000 7f5d72424140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T09:57:47.965 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-10T09:57:47.965 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-10T09:57:47.965 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000164126 s, 3.1 MB/s 2026-03-10T09:57:47.967 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T09:57:47.986 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vde 2026-03-10T09:57:48.051 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vde 2026-03-10T09:57:48.051 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T09:57:48.051 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-10T09:57:48.051 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T09:57:48.051 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T09:57:48.051 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 09:57:27.867649497 +0000 2026-03-10T09:57:48.051 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 09:54:20.572961658 +0000 2026-03-10T09:57:48.051 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 09:54:20.572961658 +0000 2026-03-10T09:57:48.052 INFO:teuthology.orchestra.run.vm02.stdout: Birth: 2026-03-10 09:50:30.327000000 +0000 2026-03-10T09:57:48.052 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vde of=/dev/null 
count=1 2026-03-10T09:57:48.119 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-10T09:57:48.119 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-10T09:57:48.119 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000247443 s, 2.1 MB/s 2026-03-10T09:57:48.121 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T09:57:48.182 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T09:57:48.183 DEBUG:teuthology.orchestra.run.vm08:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T09:57:48.200 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T09:57:48.200 DEBUG:teuthology.orchestra.run.vm08:> ls /dev/[sv]d? 2026-03-10T09:57:48.259 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vda 2026-03-10T09:57:48.259 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdb 2026-03-10T09:57:48.259 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdc 2026-03-10T09:57:48.259 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdd 2026-03-10T09:57:48.259 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vde 2026-03-10T09:57:48.259 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T09:57:48.259 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T09:57:48.259 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdb 2026-03-10T09:57:48.319 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdb 2026-03-10T09:57:48.319 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T09:57:48.319 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 221 Links: 1 Device type: fc,10 2026-03-10T09:57:48.319 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T09:57:48.319 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T09:57:48.319 INFO:teuthology.orchestra.run.vm08.stdout:Access: 
2026-03-10 09:57:30.656450790 +0000 2026-03-10T09:57:48.319 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 09:54:20.480120367 +0000 2026-03-10T09:57:48.319 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 09:54:20.480120367 +0000 2026-03-10T09:57:48.319 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 09:51:01.214000000 +0000 2026-03-10T09:57:48.319 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T09:57:48.385 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in 2026-03-10T09:57:48.385 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out 2026-03-10T09:57:48.386 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000184926 s, 2.8 MB/s 2026-03-10T09:57:48.387 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T09:57:48.398 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:48 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.399 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:48 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.399 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:48 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.399 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:48 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.399 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:48 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:48.399 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:48 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.399 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:48 vm01 ceph-mon[51930]: from='mgr.14150 
192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.399 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:48 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:48.399 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:48 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:57:48.399 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:48 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.399 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:48 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T09:57:48.399 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:48 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T09:57:48.399 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:48 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:48.399 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:57:48 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:57:48.110+0000 7fc2089b1640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-10T09:57:48.450 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdc 2026-03-10T09:57:48.509 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdc 2026-03-10T09:57:48.509 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T09:57:48.509 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 224 Links: 1 Device 
type: fc,20 2026-03-10T09:57:48.509 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T09:57:48.509 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T09:57:48.509 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 09:57:30.688450832 +0000 2026-03-10T09:57:48.509 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 09:54:20.477120364 +0000 2026-03-10T09:57:48.509 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 09:54:20.477120364 +0000 2026-03-10T09:57:48.509 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 09:51:01.224000000 +0000 2026-03-10T09:57:48.510 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T09:57:48.572 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:48 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.572 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:48 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.572 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:48 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.572 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:48 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.572 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:48 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:48.572 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:48 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.572 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:48 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 
2026-03-10T09:57:48.572 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:48 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:48.572 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:48 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:57:48.572 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:48 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.572 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:48 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T09:57:48.572 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:48 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T09:57:48.572 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:48 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:48.574 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in 2026-03-10T09:57:48.574 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out 2026-03-10T09:57:48.574 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.00013912 s, 3.7 MB/s 2026-03-10T09:57:48.575 DEBUG:teuthology.orchestra.run.vm08:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T09:57:48.634 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdd 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:48 vm02 
ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:48.255+0000 7f5d72424140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T09:57:48.659 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T09:57:48.659 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: from numpy import show_config as show_numpy_config
2026-03-10T09:57:48.659 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:48.353+0000 7f5d72424140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T09:57:48.659 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:48.391+0000 7f5d72424140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T09:57:48.659 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:48.466+0000 7f5d72424140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T09:57:48.692 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdd
2026-03-10T09:57:48.692 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T09:57:48.692 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-10T09:57:48.692 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T09:57:48.692 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T09:57:48.692 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 09:57:30.724450880 +0000
2026-03-10T09:57:48.692 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 09:54:20.484120371 +0000
2026-03-10T09:57:48.692 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 09:54:20.484120371 +0000
2026-03-10T09:57:48.692 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 09:51:01.235000000 +0000
2026-03-10T09:57:48.692 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T09:57:48.758 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T09:57:48.758 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T09:57:48.758 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000212429 s, 2.4 MB/s
2026-03-10T09:57:48.760 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T09:57:48.819 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vde
2026-03-10T09:57:48.877 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vde
2026-03-10T09:57:48.877 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T09:57:48.877 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-10T09:57:48.877 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T09:57:48.877 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T09:57:48.877 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 09:57:30.750450914 +0000
2026-03-10T09:57:48.877 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 09:54:20.510120400 +0000
2026-03-10T09:57:48.877 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 09:54:20.510120400 +0000
2026-03-10T09:57:48.877 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 09:51:01.238000000 +0000
2026-03-10T09:57:48.877 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T09:57:48.948 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T09:57:48.948 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T09:57:48.948 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000218419 s, 2.3 MB/s
2026-03-10T09:57:48.950 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T09:57:49.007 INFO:tasks.cephadm:Deploying osd.0 on vm01 with /dev/vde...
2026-03-10T09:57:49.007 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- lvm zap /dev/vde
2026-03-10T09:57:49.178 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config
2026-03-10T09:57:49.249 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:48 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:48.977+0000 7f5d72424140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T09:57:49.249 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:49.089+0000 7f5d72424140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T09:57:49.249 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:49.131+0000 7f5d72424140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T09:57:49.249 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:49.167+0000 7f5d72424140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T09:57:49.249 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:49.210+0000 7f5d72424140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T09:57:49.549 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:49 vm01 ceph-mon[51930]: Reconfiguring mgr.a (unknown last config time)...
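The per-device probe driven by the harness above (stat the node, read one sector with `dd`, confirm the device is not mounted outside devtmpfs) can be sketched in Python. This is an illustrative re-creation under stated assumptions, not teuthology's actual implementation: the function name `disk_is_usable` and the use of `/proc/mounts` in place of `mount` output are my own choices.

```python
import os

def disk_is_usable(dev: str) -> bool:
    """Mirror the probe above: a device is deployable if it exists,
    its first sector is readable, and it is not mounted anywhere
    (devtmpfs rows ignored, as in the `grep -v devtmpfs` pipeline).
    Hypothetical helper, not part of teuthology."""
    try:
        os.stat(dev)                    # like `stat /dev/vdd`
        with open(dev, "rb") as f:
            f.read(512)                 # like `dd if=/dev/vdd of=/dev/null count=1`
    except OSError:
        return False
    # like `! mount | grep -v devtmpfs | grep -q /dev/vdd`
    try:
        with open("/proc/mounts") as mounts:
            return not any(dev in line for line in mounts
                           if "devtmpfs" not in line)
    except OSError:
        return True  # no mount table available; assume unmounted
```

Note that reading a block device normally requires root, which is why the log shows the `dd` step run under `sudo`.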
2026-03-10T09:57:49.549 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:49 vm01 ceph-mon[51930]: Reconfiguring daemon mgr.a on vm01 2026-03-10T09:57:49.549 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:49 vm01 ceph-mon[51930]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:49.549 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:49 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:49.549 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:49 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:49.549 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:49 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:49.550 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:49 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:49.550 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:49 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:57:49.550 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:49 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:49.550 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:49 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:49.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-mon[54811]: Reconfiguring mgr.a (unknown last config time)... 
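The mon audit records above embed each forwarded command as a JSON list after `cmd=` (e.g. `cmd=[{"prefix": "config dump", "format": "json"}]: dispatch`). A small parser can pull out the command prefixes so repeated dispatches can be tallied when triaging a run; the record layout is assumed from these lines, and `command_prefixes` is a hypothetical helper, not part of Ceph or teuthology.

```python
import json
import re

# Match the cmd=[...] payload, with or without the surrounding quotes seen
# on "finished" records, up to the trailing ": dispatch"/": finished".
CMD_RE = re.compile(r"cmd='?(\[.*?\])'?(?=: (?:dispatch|finished)|$)")

def command_prefixes(record: str) -> list:
    """Return the 'prefix' of each command embedded in one audit record."""
    m = CMD_RE.search(record)
    if not m:
        return []  # lines logging only from=/entity= carry no command
    return [cmd.get("prefix", "") for cmd in json.loads(m.group(1))]
```

Feeding it a dispatch line from this log yields the command name, e.g. `command_prefixes(line)` gives `["config dump"]` for the `config dump` dispatch above.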
2026-03-10T09:57:49.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-mon[54811]: Reconfiguring daemon mgr.a on vm01 2026-03-10T09:57:49.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-mon[54811]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:49.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:49.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:49.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:49.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:49.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:57:49.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:49.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:49.659 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:49.248+0000 7f5d72424140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T09:57:49.659 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 
2026-03-10T09:57:49.428+0000 7f5d72424140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T09:57:49.659 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:49.480+0000 7f5d72424140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T09:57:49.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:49 vm08 ceph-mon[55477]: Reconfiguring mgr.a (unknown last config time)... 2026-03-10T09:57:49.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:49 vm08 ceph-mon[55477]: Reconfiguring daemon mgr.a on vm01 2026-03-10T09:57:49.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:49 vm08 ceph-mon[55477]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:49.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:49 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:49.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:49 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:49.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:49 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:49.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:49 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:49.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:49 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:57:49.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:49 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:49.861 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:49 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:57:50.002 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:49 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:49.718+0000 7f5d72424140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T09:57:50.278 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:57:50.290 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:50 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:50.002+0000 7f5d72424140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T09:57:50.290 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:50 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:50.040+0000 7f5d72424140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T09:57:50.290 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:50 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:50.083+0000 7f5d72424140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T09:57:50.290 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:50 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:50.161+0000 7f5d72424140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T09:57:50.290 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:50 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:50.197+0000 7f5d72424140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T09:57:50.290 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:50 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:50.290+0000 7f5d72424140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T09:57:50.299 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph orch daemon add osd vm01:/dev/vde
2026-03-10T09:57:50.474 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config
2026-03-10T09:57:50.551 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:50 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:50.408+0000 7f5d72424140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T09:57:50.908 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:50 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:50.551+0000 7f5d72424140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T09:57:50.909 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:57:50 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:57:50.590+0000 7f5d72424140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T09:57:51.518 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:51 vm01 ceph-mon[51930]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T09:57:51.518 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:51 vm01 ceph-mon[51930]: Standby manager daemon b started
2026-03-10T09:57:51.518 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:51 vm01 ceph-mon[51930]: from='mgr.? 192.168.123.102:0/1614854415' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch
2026-03-10T09:57:51.518 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:51 vm01 ceph-mon[51930]: from='mgr.? 192.168.123.102:0/1614854415' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T09:57:51.518 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:51 vm01 ceph-mon[51930]: from='mgr.?
192.168.123.102:0/1614854415' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T09:57:51.518 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:51 vm01 ceph-mon[51930]: from='mgr.? 192.168.123.102:0/1614854415' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T09:57:51.518 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:51 vm01 ceph-mon[51930]: from='client.14217 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:51.518 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:51 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:57:51.518 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:51 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:57:51.518 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:51 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:51.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:51 vm08 ceph-mon[55477]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:51.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:51 vm08 ceph-mon[55477]: Standby manager daemon b started 2026-03-10T09:57:51.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:51 vm08 ceph-mon[55477]: from='mgr.? 192.168.123.102:0/1614854415' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T09:57:51.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:51 vm08 ceph-mon[55477]: from='mgr.? 
192.168.123.102:0/1614854415' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T09:57:51.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:51 vm08 ceph-mon[55477]: from='mgr.? 192.168.123.102:0/1614854415' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T09:57:51.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:51 vm08 ceph-mon[55477]: from='mgr.? 192.168.123.102:0/1614854415' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T09:57:51.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:51 vm08 ceph-mon[55477]: from='client.14217 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:51.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:51 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:57:51.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:51 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:57:51.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:51 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:51.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:51 vm02 ceph-mon[54811]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:51.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:51 vm02 ceph-mon[54811]: Standby manager daemon b started 2026-03-10T09:57:51.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:51 vm02 ceph-mon[54811]: from='mgr.? 
192.168.123.102:0/1614854415' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T09:57:51.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:51 vm02 ceph-mon[54811]: from='mgr.? 192.168.123.102:0/1614854415' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T09:57:51.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:51 vm02 ceph-mon[54811]: from='mgr.? 192.168.123.102:0/1614854415' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T09:57:51.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:51 vm02 ceph-mon[54811]: from='mgr.? 192.168.123.102:0/1614854415' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T09:57:51.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:51 vm02 ceph-mon[54811]: from='client.14217 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:57:51.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:51 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:57:51.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:51 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:57:51.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:51 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:52.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:52 vm08 ceph-mon[55477]: mgrmap e14: a(active, since 33s), standbys: b 2026-03-10T09:57:52.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:52 vm08 
ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T09:57:52.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:52 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/665643031' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "99f5a193-89a2-4291-8218-884df42c1152"}]: dispatch 2026-03-10T09:57:52.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:52 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/665643031' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "99f5a193-89a2-4291-8218-884df42c1152"}]': finished 2026-03-10T09:57:52.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:52 vm08 ceph-mon[55477]: osdmap e5: 1 total, 0 up, 1 in 2026-03-10T09:57:52.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:52 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:57:52.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:52 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/68689482' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:57:52.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:52 vm02 ceph-mon[54811]: mgrmap e14: a(active, since 33s), standbys: b 2026-03-10T09:57:52.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:52 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T09:57:52.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:52 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/665643031' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "99f5a193-89a2-4291-8218-884df42c1152"}]: dispatch 2026-03-10T09:57:52.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:52 vm02 ceph-mon[54811]: from='client.? 
192.168.123.101:0/665643031' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "99f5a193-89a2-4291-8218-884df42c1152"}]': finished 2026-03-10T09:57:52.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:52 vm02 ceph-mon[54811]: osdmap e5: 1 total, 0 up, 1 in 2026-03-10T09:57:52.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:52 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:57:52.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:52 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/68689482' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:57:52.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:52 vm01 ceph-mon[51930]: mgrmap e14: a(active, since 33s), standbys: b 2026-03-10T09:57:52.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:52 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T09:57:52.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:52 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/665643031' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "99f5a193-89a2-4291-8218-884df42c1152"}]: dispatch 2026-03-10T09:57:52.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:52 vm01 ceph-mon[51930]: from='client.? 
192.168.123.101:0/665643031' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "99f5a193-89a2-4291-8218-884df42c1152"}]': finished 2026-03-10T09:57:52.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:52 vm01 ceph-mon[51930]: osdmap e5: 1 total, 0 up, 1 in 2026-03-10T09:57:52.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:52 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:57:52.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:52 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/68689482' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:57:54.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:53 vm08 ceph-mon[55477]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:54.158 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:53 vm02 ceph-mon[54811]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:54.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:53 vm01 ceph-mon[51930]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:55.857 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:55 vm01 ceph-mon[51930]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:56.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:55 vm08 ceph-mon[55477]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:56.158 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:55 vm02 ceph-mon[54811]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:56.984 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:56 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T09:57:56.984 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:56 vm01 ceph-mon[51930]: from='mgr.14150 
192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:57.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:56 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T09:57:57.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:56 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:57.158 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:56 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T09:57:57.158 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:56 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:57:58.026 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:57 vm01 ceph-mon[51930]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:58.027 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:57 vm01 ceph-mon[51930]: Deploying daemon osd.0 on vm01 2026-03-10T09:57:58.110 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:57 vm08 ceph-mon[55477]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:58.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:57 vm08 ceph-mon[55477]: Deploying daemon osd.0 on vm01 2026-03-10T09:57:58.158 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:57 vm02 ceph-mon[54811]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:58.158 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:57 vm02 ceph-mon[54811]: Deploying daemon osd.0 on vm01 2026-03-10T09:57:58.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:58 vm01 ceph-mon[51930]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B 
avail 2026-03-10T09:57:58.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:58 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:58.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:58 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:58.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:57:58 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:59.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:58 vm08 ceph-mon[55477]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:59.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:58 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:59.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:58 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:59.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:57:58 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:59.158 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:58 vm02 ceph-mon[54811]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:57:59.158 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:58 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:57:59.158 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:58 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:59.158 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:57:58 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:57:59.979 
INFO:teuthology.orchestra.run.vm01.stdout:Created osd(s) 0 on host 'vm01'
2026-03-10T09:58:00.145 DEBUG:teuthology.orchestra.run.vm01:osd.0> sudo journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@osd.0.service
2026-03-10T09:58:00.151 INFO:tasks.cephadm:Deploying osd.1 on vm02 with /dev/vde...
2026-03-10T09:58:00.151 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- lvm zap /dev/vde
2026-03-10T09:58:00.329 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.b/config
2026-03-10T09:58:00.386 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:00 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:00.386 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:00 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:00.386 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:00 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:58:00.386 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:00 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:58:00.386 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:00 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:00.386 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:00 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:58:00.386 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:00 vm01
ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:00.386 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:00 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:00.424 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:00 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:00.424 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:00 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:00.424 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:00 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:00.424 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:00 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:00.424 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:00 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:00.424 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:00 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:00.424 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:00 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:00.424 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:00 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:00.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:00 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:00.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:00 vm08 ceph-mon[55477]: from='mgr.14150 
192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:00.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:00 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:00.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:00 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:00.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:00 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:00.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:00 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:00.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:00 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:00.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:00 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:00.679 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:58:00 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-0[64932]: 2026-03-10T09:58:00.385+0000 7fc567afd740 -1 osd.0 0 log_to_monitors true 2026-03-10T09:58:01.396 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:58:01.410 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph orch daemon add osd vm02:/dev/vde 2026-03-10T09:58:01.577 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.b/config 2026-03-10T09:58:01.604 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:01 vm02 ceph-mon[54811]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:58:01.604 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:01 vm02 ceph-mon[54811]: from='osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T09:58:01.604 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:01 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:01.604 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:01 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:01.604 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:01 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:01.604 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:01 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:01.604 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:01 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:01.605 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:01 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:01.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:01 vm08 ceph-mon[55477]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:58:01.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:01 vm08 ceph-mon[55477]: from='osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", 
"class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T09:58:01.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:01 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:01.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:01 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:01.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:01 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:01.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:01 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:01.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:01 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:01.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:01 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:01 vm01 ceph-mon[51930]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:58:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:01 vm01 ceph-mon[51930]: from='osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T09:58:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:01 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:01 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 
2026-03-10T09:58:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:01 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:01 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:01 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:01.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:01 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:02.567 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:02 vm02 ceph-mon[54811]: Detected new or changed devices on vm01 2026-03-10T09:58:02.567 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:02 vm02 ceph-mon[54811]: Adjusting osd_memory_target on vm01 to 257.0M 2026-03-10T09:58:02.567 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:02 vm02 ceph-mon[54811]: Unable to set osd_memory_target on vm01 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-10T09:58:02.567 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:02 vm02 ceph-mon[54811]: from='osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T09:58:02.567 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:02 vm02 ceph-mon[54811]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T09:58:02.567 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:02 vm02 ceph-mon[54811]: from='osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780]' entity='osd.0' 
cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T09:58:02.567 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:02 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:02.567 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:02 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:58:02.567 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:02 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:58:02.567 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:02 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:02.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:02 vm08 ceph-mon[55477]: Detected new or changed devices on vm01 2026-03-10T09:58:02.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:02 vm08 ceph-mon[55477]: Adjusting osd_memory_target on vm01 to 257.0M 2026-03-10T09:58:02.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:02 vm08 ceph-mon[55477]: Unable to set osd_memory_target on vm01 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-10T09:58:02.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:02 vm08 ceph-mon[55477]: from='osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T09:58:02.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:02 vm08 ceph-mon[55477]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T09:58:02.611 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:02 vm08 ceph-mon[55477]: from='osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T09:58:02.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:02 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:02.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:02 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:58:02.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:02 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:58:02.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:02 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:02.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:02 vm01 ceph-mon[51930]: Detected new or changed devices on vm01 2026-03-10T09:58:02.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:02 vm01 ceph-mon[51930]: Adjusting osd_memory_target on vm01 to 257.0M 2026-03-10T09:58:02.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:02 vm01 ceph-mon[51930]: Unable to set osd_memory_target on vm01 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-10T09:58:02.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:02 vm01 ceph-mon[51930]: from='osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': 
finished 2026-03-10T09:58:02.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:02 vm01 ceph-mon[51930]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T09:58:02.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:02 vm01 ceph-mon[51930]: from='osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T09:58:02.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:02 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:02.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:02 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:58:02.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:02 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:58:02.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:02 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:03.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:03 vm08 ceph-mon[55477]: from='client.24119 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:58:03.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:03 vm08 ceph-mon[55477]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:58:03.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:03 vm08 ceph-mon[55477]: from='osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780]' entity='osd.0' cmd='[{"prefix": 
"osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-10T09:58:03.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:03 vm08 ceph-mon[55477]: osdmap e7: 1 total, 0 up, 1 in 2026-03-10T09:58:03.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:03 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:03.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:03 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:03.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:03 vm08 ceph-mon[55477]: from='client.? 192.168.123.102:0/1567464857' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9ae59e53-b925-4fd6-9e0b-3a01ba1d1990"}]: dispatch 2026-03-10T09:58:03.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:03 vm08 ceph-mon[55477]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9ae59e53-b925-4fd6-9e0b-3a01ba1d1990"}]: dispatch 2026-03-10T09:58:03.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:03 vm08 ceph-mon[55477]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9ae59e53-b925-4fd6-9e0b-3a01ba1d1990"}]': finished 2026-03-10T09:58:03.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:03 vm08 ceph-mon[55477]: osdmap e8: 2 total, 0 up, 2 in 2026-03-10T09:58:03.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:03 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:03.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:03 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:03.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:03 vm08 ceph-mon[55477]: from='client.? 192.168.123.102:0/3192522147' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:58:03.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:03 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:03.612 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:03 vm08 ceph-mon[55477]: from='osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780]' entity='osd.0' 2026-03-10T09:58:03.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:03 vm02 ceph-mon[54811]: from='client.24119 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:58:03.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:03 vm02 ceph-mon[54811]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:58:03.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:03 vm02 ceph-mon[54811]: from='osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": 
["host=vm01", "root=default"]}]': finished 2026-03-10T09:58:03.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:03 vm02 ceph-mon[54811]: osdmap e7: 1 total, 0 up, 1 in 2026-03-10T09:58:03.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:03 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:03.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:03 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:03.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:03 vm02 ceph-mon[54811]: from='client.? 192.168.123.102:0/1567464857' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9ae59e53-b925-4fd6-9e0b-3a01ba1d1990"}]: dispatch 2026-03-10T09:58:03.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:03 vm02 ceph-mon[54811]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9ae59e53-b925-4fd6-9e0b-3a01ba1d1990"}]: dispatch 2026-03-10T09:58:03.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:03 vm02 ceph-mon[54811]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9ae59e53-b925-4fd6-9e0b-3a01ba1d1990"}]': finished 2026-03-10T09:58:03.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:03 vm02 ceph-mon[54811]: osdmap e8: 2 total, 0 up, 2 in 2026-03-10T09:58:03.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:03 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:03.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:03 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:03.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:03 vm02 ceph-mon[54811]: from='client.? 192.168.123.102:0/3192522147' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:58:03.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:03 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:03.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:03 vm02 ceph-mon[54811]: from='osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780]' entity='osd.0' 2026-03-10T09:58:03.679 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:58:03 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-0[64932]: 2026-03-10T09:58:03.305+0000 7fc563a7e640 -1 osd.0 0 waiting for initial osdmap 2026-03-10T09:58:03.679 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:58:03 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-0[64932]: 2026-03-10T09:58:03.313+0000 7fc55f0a7640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T09:58:03.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:03 vm01 ceph-mon[51930]: from='client.24119 -' entity='client.admin' cmd=[{"prefix": "orch daemon add 
osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:58:03.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:03 vm01 ceph-mon[51930]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:58:03.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:03 vm01 ceph-mon[51930]: from='osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-10T09:58:03.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:03 vm01 ceph-mon[51930]: osdmap e7: 1 total, 0 up, 1 in 2026-03-10T09:58:03.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:03 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:03.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:03 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:03.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:03 vm01 ceph-mon[51930]: from='client.? 192.168.123.102:0/1567464857' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9ae59e53-b925-4fd6-9e0b-3a01ba1d1990"}]: dispatch 2026-03-10T09:58:03.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:03 vm01 ceph-mon[51930]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9ae59e53-b925-4fd6-9e0b-3a01ba1d1990"}]: dispatch 2026-03-10T09:58:03.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:03 vm01 ceph-mon[51930]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9ae59e53-b925-4fd6-9e0b-3a01ba1d1990"}]': finished 2026-03-10T09:58:03.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:03 vm01 ceph-mon[51930]: osdmap e8: 2 total, 0 up, 2 in 2026-03-10T09:58:03.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:03 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:03.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:03 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:03.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:03 vm01 ceph-mon[51930]: from='client.? 192.168.123.102:0/3192522147' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:58:03.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:03 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:03.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:03 vm01 ceph-mon[51930]: from='osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780]' entity='osd.0' 2026-03-10T09:58:04.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:04 vm08 ceph-mon[55477]: purged_snaps scrub starts 2026-03-10T09:58:04.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:04 vm08 ceph-mon[55477]: purged_snaps scrub ok 2026-03-10T09:58:04.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:04 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:04.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:04 vm02 ceph-mon[54811]: purged_snaps scrub starts 2026-03-10T09:58:04.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:04 vm02 ceph-mon[54811]: 
purged_snaps scrub ok 2026-03-10T09:58:04.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:04 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:04.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:04 vm01 ceph-mon[51930]: purged_snaps scrub starts 2026-03-10T09:58:04.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:04 vm01 ceph-mon[51930]: purged_snaps scrub ok 2026-03-10T09:58:04.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:04 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:05.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:05 vm08 ceph-mon[55477]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:58:05.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:05 vm08 ceph-mon[55477]: osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780] boot 2026-03-10T09:58:05.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:05 vm08 ceph-mon[55477]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T09:58:05.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:05 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:05.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:05 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:05.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:05 vm02 ceph-mon[54811]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:58:05.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:05 vm02 ceph-mon[54811]: osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780] boot 2026-03-10T09:58:05.659 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:05 vm02 ceph-mon[54811]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T09:58:05.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:05 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:05.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:05 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:05.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:05 vm01 ceph-mon[51930]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:58:05.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:05 vm01 ceph-mon[51930]: osd.0 [v2:192.168.123.101:6802/3325110780,v1:192.168.123.101:6803/3325110780] boot 2026-03-10T09:58:05.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:05 vm01 ceph-mon[51930]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T09:58:05.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:05 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:58:05.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:05 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:06.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:06 vm08 ceph-mon[55477]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T09:58:06.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:06 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:06.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:06 vm02 ceph-mon[54811]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T09:58:06.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:06 vm02 
ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:06.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:06 vm01 ceph-mon[51930]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T09:58:06.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:06 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:07.551 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:07 vm02 ceph-mon[54811]: pgmap v24: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:58:07.551 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:07 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T09:58:07.551 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:07 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:07.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:07 vm08 ceph-mon[55477]: pgmap v24: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:58:07.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:07 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T09:58:07.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:07 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:07.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:07 vm01 ceph-mon[51930]: pgmap v24: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:58:07.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:07 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' 
entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T09:58:07.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:07 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:08.361 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:08 vm02 ceph-mon[54811]: Deploying daemon osd.1 on vm02 2026-03-10T09:58:08.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:08 vm08 ceph-mon[55477]: Deploying daemon osd.1 on vm02 2026-03-10T09:58:08.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:08 vm01 ceph-mon[51930]: Deploying daemon osd.1 on vm02 2026-03-10T09:58:09.557 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:09 vm02 ceph-mon[54811]: pgmap v25: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:58:09.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:09 vm01 ceph-mon[51930]: pgmap v25: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:58:09.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:09 vm08 ceph-mon[55477]: pgmap v25: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:58:10.411 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:10 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:10.411 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:10 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:10.411 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:10 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:10.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:10 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:10.679 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:10 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:10.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:10 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:10.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:10 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:10.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:10 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:10.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:10 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:11.196 INFO:teuthology.orchestra.run.vm02.stdout:Created osd(s) 1 on host 'vm02' 2026-03-10T09:58:11.360 DEBUG:teuthology.orchestra.run.vm02:osd.1> sudo journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@osd.1.service 2026-03-10T09:58:11.361 INFO:tasks.cephadm:Deploying osd.2 on vm08 with /dev/vde... 
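Annotation: the repeated warning above ("Unable to set osd_memory_target on vm01 to 269530726: ... below minimum 939524096") is cephadm's memory autotuner proposing a per-OSD target (~257.0 MiB on these small VMs) that falls under the option's hard floor of 939524096 bytes (896 MiB). A minimal sketch of that bounds check (hypothetical helper, not cephadm's actual code; only the two constants come from this log):

```python
# Sketch of the osd_memory_target bounds check implied by the log.
# OSD_MEMORY_TARGET_MIN is the minimum quoted in the error; the helper name is hypothetical.
OSD_MEMORY_TARGET_MIN = 939_524_096  # 896 MiB

def validate_osd_memory_target(value: int) -> bool:
    """Return True only when the proposed target meets the configured minimum."""
    return value >= OSD_MEMORY_TARGET_MIN

# The value cephadm computed for vm01 (~257.0 MiB) is rejected:
proposed = 269_530_726
print(validate_osd_memory_target(proposed))  # False: 257.0 MiB < 896 MiB floor
```

On hosts this small the autotuned value can never clear the floor, so the warning repeats on every device refresh; it is harmless for the test but explains the recurring journal noise.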
2026-03-10T09:58:11.361 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- lvm zap /dev/vde 2026-03-10T09:58:11.542 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.c/config 2026-03-10T09:58:11.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:11 vm02 ceph-mon[54811]: pgmap v26: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:58:11.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:11 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:11.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:11 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:11.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:11 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:11.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:11 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:11.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:11 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:11.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:11 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:11.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:11 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:11.658 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:11 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:11.660 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:11 vm08 ceph-mon[55477]: pgmap v26: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:58:11.660 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:11 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:11.660 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:11 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:11.660 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:11 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:11.660 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:11 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:11.660 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:11 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:11.660 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:11 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:11.660 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:11 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:11.660 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:11 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:11.910 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:58:11 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-1[59068]: 2026-03-10T09:58:11.785+0000 7f13c4677740 -1 osd.1 0 log_to_monitors true 
2026-03-10T09:58:11.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:11 vm01 ceph-mon[51930]: pgmap v26: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:58:11.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:11 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:11.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:11 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:11.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:11 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:11.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:11 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:11.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:11 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:11.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:11 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:11.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:11 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:11.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:11 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:12.596 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:58:12.615 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph 
orch daemon add osd vm08:/dev/vde 2026-03-10T09:58:12.638 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:12 vm08 ceph-mon[55477]: from='osd.1 [v2:192.168.123.102:6800/2601636124,v1:192.168.123.102:6801/2601636124]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T09:58:12.638 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:12 vm08 ceph-mon[55477]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T09:58:12.638 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:12 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:12.638 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:12 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:12.638 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:12 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:12.638 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:12 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:12.638 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:12 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:12.638 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:12 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:12.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:12 vm02 ceph-mon[54811]: from='osd.1 [v2:192.168.123.102:6800/2601636124,v1:192.168.123.102:6801/2601636124]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: 
dispatch 2026-03-10T09:58:12.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:12 vm02 ceph-mon[54811]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T09:58:12.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:12 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:12.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:12 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:12.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:12 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:12.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:12 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:12.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:12 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:12.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:12 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:12.787 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.c/config 2026-03-10T09:58:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:12 vm01 ceph-mon[51930]: from='osd.1 [v2:192.168.123.102:6800/2601636124,v1:192.168.123.102:6801/2601636124]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T09:58:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:12 vm01 ceph-mon[51930]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T09:58:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:12 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:12 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:12 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:12 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:12 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:12.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:12 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 2026-03-10T09:58:13.520 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:13 vm08 ceph-mon[55477]: pgmap v27: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:58:13.520 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:13 vm08 ceph-mon[55477]: Detected new or changed devices on vm02 2026-03-10T09:58:13.520 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:13 vm08 ceph-mon[55477]: Adjusting osd_memory_target on vm02 to 257.0M 2026-03-10T09:58:13.520 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:13 vm08 ceph-mon[55477]: Unable to set osd_memory_target on vm02 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T09:58:13.521 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:13 vm08 
ceph-mon[55477]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T09:58:13.521 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:13 vm08 ceph-mon[55477]: from='osd.1 [v2:192.168.123.102:6800/2601636124,v1:192.168.123.102:6801/2601636124]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T09:58:13.521 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:13 vm08 ceph-mon[55477]: osdmap e11: 2 total, 1 up, 2 in 2026-03-10T09:58:13.521 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:13 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:13.521 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:13 vm08 ceph-mon[55477]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T09:58:13.521 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:13 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:58:13.521 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:13 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:58:13.521 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:13 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:13.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:13 vm02 ceph-mon[54811]: pgmap v27: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:58:13.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:13 vm02 
ceph-mon[54811]: Detected new or changed devices on vm02 2026-03-10T09:58:13.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:13 vm02 ceph-mon[54811]: Adjusting osd_memory_target on vm02 to 257.0M 2026-03-10T09:58:13.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:13 vm02 ceph-mon[54811]: Unable to set osd_memory_target on vm02 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T09:58:13.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:13 vm02 ceph-mon[54811]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T09:58:13.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:13 vm02 ceph-mon[54811]: from='osd.1 [v2:192.168.123.102:6800/2601636124,v1:192.168.123.102:6801/2601636124]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T09:58:13.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:13 vm02 ceph-mon[54811]: osdmap e11: 2 total, 1 up, 2 in 2026-03-10T09:58:13.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:13 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:13.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:13 vm02 ceph-mon[54811]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T09:58:13.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:13 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:58:13.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:13 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", 
"entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:58:13.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:13 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:13.909 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:58:13 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-1[59068]: 2026-03-10T09:58:13.492+0000 7f13c05f8640 -1 osd.1 0 waiting for initial osdmap 2026-03-10T09:58:13.909 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:58:13 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-1[59068]: 2026-03-10T09:58:13.496+0000 7f13bc422640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T09:58:13.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:13 vm01 ceph-mon[51930]: pgmap v27: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:58:13.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:13 vm01 ceph-mon[51930]: Detected new or changed devices on vm02 2026-03-10T09:58:13.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:13 vm01 ceph-mon[51930]: Adjusting osd_memory_target on vm02 to 257.0M 2026-03-10T09:58:13.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:13 vm01 ceph-mon[51930]: Unable to set osd_memory_target on vm02 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T09:58:13.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:13 vm01 ceph-mon[51930]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T09:58:13.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:13 vm01 ceph-mon[51930]: from='osd.1 [v2:192.168.123.102:6800/2601636124,v1:192.168.123.102:6801/2601636124]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 
2026-03-10T09:58:13.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:13 vm01 ceph-mon[51930]: osdmap e11: 2 total, 1 up, 2 in 2026-03-10T09:58:13.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:13 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:13.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:13 vm01 ceph-mon[51930]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T09:58:13.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:13 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:58:13.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:13 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:58:13.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:13 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:14.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:14 vm08 ceph-mon[55477]: from='client.24139 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:58:14.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:14 vm08 ceph-mon[55477]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T09:58:14.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:14 vm08 ceph-mon[55477]: osdmap e12: 2 total, 1 up, 2 in 2026-03-10T09:58:14.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 
09:58:14 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:14.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:14 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:14.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:14 vm08 ceph-mon[55477]: from='client.? 192.168.123.108:0/560273292' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c372ae37-7d96-4cb3-9d8e-8451dcea6556"}]: dispatch 2026-03-10T09:58:14.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:14 vm08 ceph-mon[55477]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c372ae37-7d96-4cb3-9d8e-8451dcea6556"}]: dispatch 2026-03-10T09:58:14.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:14 vm08 ceph-mon[55477]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c372ae37-7d96-4cb3-9d8e-8451dcea6556"}]': finished 2026-03-10T09:58:14.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:14 vm08 ceph-mon[55477]: osd.1 [v2:192.168.123.102:6800/2601636124,v1:192.168.123.102:6801/2601636124] boot 2026-03-10T09:58:14.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:14 vm08 ceph-mon[55477]: osdmap e13: 3 total, 2 up, 3 in 2026-03-10T09:58:14.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:14 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:14.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:14 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:58:14.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:14 vm08 ceph-mon[55477]: from='client.? 
192.168.123.108:0/79375194' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:58:14.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:14 vm02 ceph-mon[54811]: from='client.24139 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:58:14.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:14 vm02 ceph-mon[54811]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T09:58:14.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:14 vm02 ceph-mon[54811]: osdmap e12: 2 total, 1 up, 2 in 2026-03-10T09:58:14.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:14 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:14.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:14 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:14.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:14 vm02 ceph-mon[54811]: from='client.? 192.168.123.108:0/560273292' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c372ae37-7d96-4cb3-9d8e-8451dcea6556"}]: dispatch 2026-03-10T09:58:14.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:14 vm02 ceph-mon[54811]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c372ae37-7d96-4cb3-9d8e-8451dcea6556"}]: dispatch 2026-03-10T09:58:14.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:14 vm02 ceph-mon[54811]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c372ae37-7d96-4cb3-9d8e-8451dcea6556"}]': finished 2026-03-10T09:58:14.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:14 vm02 ceph-mon[54811]: osd.1 [v2:192.168.123.102:6800/2601636124,v1:192.168.123.102:6801/2601636124] boot 2026-03-10T09:58:14.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:14 vm02 ceph-mon[54811]: osdmap e13: 3 total, 2 up, 3 in 2026-03-10T09:58:14.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:14 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:14.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:14 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:58:14.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:14 vm02 ceph-mon[54811]: from='client.? 192.168.123.108:0/79375194' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:58:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:14 vm01 ceph-mon[51930]: from='client.24139 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:58:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:14 vm01 ceph-mon[51930]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T09:58:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:14 vm01 ceph-mon[51930]: osdmap e12: 2 total, 1 up, 2 in 2026-03-10T09:58:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:14 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 
09:58:14 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:14 vm01 ceph-mon[51930]: from='client.? 192.168.123.108:0/560273292' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c372ae37-7d96-4cb3-9d8e-8451dcea6556"}]: dispatch 2026-03-10T09:58:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:14 vm01 ceph-mon[51930]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c372ae37-7d96-4cb3-9d8e-8451dcea6556"}]: dispatch 2026-03-10T09:58:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:14 vm01 ceph-mon[51930]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c372ae37-7d96-4cb3-9d8e-8451dcea6556"}]': finished 2026-03-10T09:58:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:14 vm01 ceph-mon[51930]: osd.1 [v2:192.168.123.102:6800/2601636124,v1:192.168.123.102:6801/2601636124] boot 2026-03-10T09:58:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:14 vm01 ceph-mon[51930]: osdmap e13: 3 total, 2 up, 3 in 2026-03-10T09:58:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:14 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:58:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:14 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:58:14.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:14 vm01 ceph-mon[51930]: from='client.? 
192.168.123.108:0/79375194' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:58:15.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:15 vm08 ceph-mon[55477]: purged_snaps scrub starts 2026-03-10T09:58:15.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:15 vm08 ceph-mon[55477]: purged_snaps scrub ok 2026-03-10T09:58:15.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:15 vm08 ceph-mon[55477]: pgmap v31: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:58:15.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:15 vm02 ceph-mon[54811]: purged_snaps scrub starts 2026-03-10T09:58:15.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:15 vm02 ceph-mon[54811]: purged_snaps scrub ok 2026-03-10T09:58:15.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:15 vm02 ceph-mon[54811]: pgmap v31: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:58:15.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:15 vm01 ceph-mon[51930]: purged_snaps scrub starts 2026-03-10T09:58:15.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:15 vm01 ceph-mon[51930]: purged_snaps scrub ok 2026-03-10T09:58:15.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:15 vm01 ceph-mon[51930]: pgmap v31: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:58:16.856 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:16 vm08 ceph-mon[55477]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T09:58:16.856 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:16 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:58:16.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:16 vm02 ceph-mon[54811]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T09:58:16.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:16 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' 
cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:58:16.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:16 vm01 ceph-mon[51930]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T09:58:16.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:16 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:58:17.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:17 vm08 ceph-mon[55477]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T09:58:17.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:17 vm02 ceph-mon[54811]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T09:58:17.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:17 vm01 ceph-mon[51930]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T09:58:18.685 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:18 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T09:58:18.686 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:18 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:18.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:18 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T09:58:18.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:18 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:18.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:18 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 
2026-03-10T09:58:18.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:18 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:58:19.604 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:19 vm08 ceph-mon[55477]: Deploying daemon osd.2 on vm08
2026-03-10T09:58:19.605 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:19 vm08 ceph-mon[55477]: pgmap v34: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T09:58:19.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:19 vm02 ceph-mon[54811]: Deploying daemon osd.2 on vm08
2026-03-10T09:58:19.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:19 vm02 ceph-mon[54811]: pgmap v34: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T09:58:19.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:19 vm01 ceph-mon[51930]: Deploying daemon osd.2 on vm08
2026-03-10T09:58:19.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:19 vm01 ceph-mon[51930]: pgmap v34: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T09:58:21.791 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:21 vm08 ceph-mon[55477]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T09:58:21.791 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:21 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:58:21.791 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:21 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:21.791 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:21 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:21.791 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:21 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:21.791 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:21 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:21.791 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:21 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:58:21.791 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:21 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:58:21.791 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:21 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:21.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:21 vm02 ceph-mon[54811]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T09:58:21.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:21 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:58:21.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:21 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:21.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:21 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:21.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:21 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:21.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:21 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:21.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:21 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:58:21.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:21 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:58:21.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:21 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:21.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:21 vm01 ceph-mon[51930]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T09:58:21.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:21 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:58:21.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:21 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:21.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:21 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:21.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:21 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:21.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:21 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:21.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:21 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:58:21.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:21 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:58:21.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:21 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:21.950 INFO:teuthology.orchestra.run.vm08.stdout:Created osd(s) 2 on host 'vm08'
2026-03-10T09:58:22.120 DEBUG:teuthology.orchestra.run.vm08:osd.2> sudo journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@osd.2.service
2026-03-10T09:58:22.163 INFO:tasks.cephadm:Waiting for 3 OSDs to come up...
2026-03-10T09:58:22.163 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph osd stat -f json
2026-03-10T09:58:22.336 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config
2026-03-10T09:58:22.597 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:58:22.830 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:22 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:58:22.830 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:22 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:22.830 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:22 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:22.830 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:22 vm01 ceph-mon[51930]: from='osd.2 [v2:192.168.123.108:6800/1630299362,v1:192.168.123.108:6801/1630299362]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T09:58:22.830 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:22 vm01 ceph-mon[51930]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T09:58:22.860 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":14,"num_osds":3,"num_up_osds":2,"osd_up_since":1773136693,"num_in_osds":3,"osd_in_since":1773136693,"num_remapped_pgs":0}
2026-03-10T09:58:22.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:22 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:58:22.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:22 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:22.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:22 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:22.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:22 vm08 ceph-mon[55477]: from='osd.2 [v2:192.168.123.108:6800/1630299362,v1:192.168.123.108:6801/1630299362]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T09:58:22.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:22 vm08 ceph-mon[55477]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T09:58:22.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:22 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:58:22.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:22 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:22.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:22 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:22.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:22 vm02 ceph-mon[54811]: from='osd.2 [v2:192.168.123.108:6800/1630299362,v1:192.168.123.108:6801/1630299362]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T09:58:22.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:22 vm02 ceph-mon[54811]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T09:58:23.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:23 vm08 ceph-mon[55477]: pgmap v36: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T09:58:23.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:23 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/1480933115' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T09:58:23.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:23 vm08 ceph-mon[55477]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T09:58:23.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:23 vm08 ceph-mon[55477]: osdmap e15: 3 total, 2 up, 3 in
2026-03-10T09:58:23.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:23 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:23.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:23 vm08 ceph-mon[55477]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch
2026-03-10T09:58:23.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:23 vm08 ceph-mon[55477]: from='osd.2 [v2:192.168.123.108:6800/1630299362,v1:192.168.123.108:6801/1630299362]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch
2026-03-10T09:58:23.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:23 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:23.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:23 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:23.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:23 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-10T09:58:23.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:23 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:23.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:23 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:58:23.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:23 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:58:23.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:23 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:23.860 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph osd stat -f json
2026-03-10T09:58:23.885 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:23 vm01 ceph-mon[51930]: pgmap v36: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T09:58:23.885 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:23 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/1480933115' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T09:58:23.886 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:23 vm01 ceph-mon[51930]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T09:58:23.886 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:23 vm01 ceph-mon[51930]: osdmap e15: 3 total, 2 up, 3 in
2026-03-10T09:58:23.886 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:23 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:23.886 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:23 vm01 ceph-mon[51930]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch
2026-03-10T09:58:23.886 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:23 vm01 ceph-mon[51930]: from='osd.2 [v2:192.168.123.108:6800/1630299362,v1:192.168.123.108:6801/1630299362]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch
2026-03-10T09:58:23.886 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:23 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:23.886 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:23 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:23.886 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:23 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-10T09:58:23.886 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:23 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:23.886 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:23 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:58:23.886 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:23 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:58:23.886 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:23 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:23.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:23 vm02 ceph-mon[54811]: pgmap v36: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T09:58:23.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:23 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/1480933115' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T09:58:23.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:23 vm02 ceph-mon[54811]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T09:58:23.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:23 vm02 ceph-mon[54811]: osdmap e15: 3 total, 2 up, 3 in
2026-03-10T09:58:23.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:23 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:23.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:23 vm02 ceph-mon[54811]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch
2026-03-10T09:58:23.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:23 vm02 ceph-mon[54811]: from='osd.2 [v2:192.168.123.108:6800/1630299362,v1:192.168.123.108:6801/1630299362]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch
2026-03-10T09:58:23.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:23 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:23.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:23 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:23.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:23 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-10T09:58:23.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:23 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:23.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:23 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:58:23.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:23 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:58:23.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:23 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a'
2026-03-10T09:58:24.039 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config
2026-03-10T09:58:24.271 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:58:24.452 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":16,"num_osds":3,"num_up_osds":2,"osd_up_since":1773136693,"num_in_osds":3,"osd_in_since":1773136693,"num_remapped_pgs":0}
2026-03-10T09:58:24.532 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:58:24 vm08 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-2[59550]: 2026-03-10T09:58:24.136+0000 7f4cfe0ee640 -1 osd.2 0 waiting for initial osdmap
2026-03-10T09:58:24.532 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:58:24 vm08 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-2[59550]: 2026-03-10T09:58:24.140+0000 7f4cf9f18640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T09:58:24.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:24 vm01 ceph-mon[51930]: Detected new or changed devices on vm08
2026-03-10T09:58:24.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:24 vm01 ceph-mon[51930]: Adjusting osd_memory_target on vm08 to 4353M
2026-03-10T09:58:24.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:24 vm01 ceph-mon[51930]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished
2026-03-10T09:58:24.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:24 vm01 ceph-mon[51930]: osdmap e16: 3 total, 2 up, 3 in
2026-03-10T09:58:24.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:24 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:24.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:24 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:24.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:24 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/573549174' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T09:58:24.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:24 vm08 ceph-mon[55477]: Detected new or changed devices on vm08
2026-03-10T09:58:24.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:24 vm08 ceph-mon[55477]: Adjusting osd_memory_target on vm08 to 4353M
2026-03-10T09:58:24.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:24 vm08 ceph-mon[55477]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished
2026-03-10T09:58:24.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:24 vm08 ceph-mon[55477]: osdmap e16: 3 total, 2 up, 3 in
2026-03-10T09:58:24.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:24 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:24.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:24 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:24.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:24 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/573549174' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T09:58:24.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:24 vm02 ceph-mon[54811]: Detected new or changed devices on vm08
2026-03-10T09:58:24.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:24 vm02 ceph-mon[54811]: Adjusting osd_memory_target on vm08 to 4353M
2026-03-10T09:58:24.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:24 vm02 ceph-mon[54811]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished
2026-03-10T09:58:24.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:24 vm02 ceph-mon[54811]: osdmap e16: 3 total, 2 up, 3 in
2026-03-10T09:58:24.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:24 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:24.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:24 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:24.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:24 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/573549174' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T09:58:25.453 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph osd stat -f json
2026-03-10T09:58:25.644 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config
2026-03-10T09:58:25.733 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:25 vm01 ceph-mon[51930]: purged_snaps scrub starts
2026-03-10T09:58:25.733 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:25 vm01 ceph-mon[51930]: purged_snaps scrub ok
2026-03-10T09:58:25.733 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:25 vm01 ceph-mon[51930]: pgmap v39: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T09:58:25.733 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:25 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:25.733 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:25 vm01 ceph-mon[51930]: osd.2 [v2:192.168.123.108:6800/1630299362,v1:192.168.123.108:6801/1630299362] boot
2026-03-10T09:58:25.733 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:25 vm01 ceph-mon[51930]: osdmap e17: 3 total, 3 up, 3 in
2026-03-10T09:58:25.734 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:25 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:25.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:25 vm08 ceph-mon[55477]: purged_snaps scrub starts
2026-03-10T09:58:25.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:25 vm08 ceph-mon[55477]: purged_snaps scrub ok
2026-03-10T09:58:25.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:25 vm08 ceph-mon[55477]: pgmap v39: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T09:58:25.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:25 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:25.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:25 vm08 ceph-mon[55477]: osd.2 [v2:192.168.123.108:6800/1630299362,v1:192.168.123.108:6801/1630299362] boot
2026-03-10T09:58:25.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:25 vm08 ceph-mon[55477]: osdmap e17: 3 total, 3 up, 3 in
2026-03-10T09:58:25.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:25 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:25.887 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:58:25.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:25 vm02 ceph-mon[54811]: purged_snaps scrub starts
2026-03-10T09:58:25.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:25 vm02 ceph-mon[54811]: purged_snaps scrub ok
2026-03-10T09:58:25.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:25 vm02 ceph-mon[54811]: pgmap v39: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T09:58:25.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:25 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:25.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:25 vm02 ceph-mon[54811]: osd.2 [v2:192.168.123.108:6800/1630299362,v1:192.168.123.108:6801/1630299362] boot
2026-03-10T09:58:25.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:25 vm02 ceph-mon[54811]: osdmap e17: 3 total, 3 up, 3 in
2026-03-10T09:58:25.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:25 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:26.060 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":17,"num_osds":3,"num_up_osds":3,"osd_up_since":1773136705,"num_in_osds":3,"osd_in_since":1773136693,"num_remapped_pgs":0}
2026-03-10T09:58:26.060 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph osd dump --format=json
2026-03-10T09:58:26.250 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config
2026-03-10T09:58:26.497 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:58:26.497 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":17,"fsid":"4533cc1c-1c67-11f1-85c0-e37e5114407d","created":"2026-03-10T09:56:55.158820+0000","modified":"2026-03-10T09:58:25.131669+0000","last_up_change":"2026-03-10T09:58:25.131669+0000","last_in_change":"2026-03-10T09:58:13.970127+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"99f5a193-89a2-4291-8218-884df42c1152","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":3325110780},{"type":"v1","addr":"192.168.123.101:6803","nonce":3325110780}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":3325110780},{"type":"v1","addr":"192.168.123.101:6805","nonce":3325110780}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":3325110780},{"type":"v1","addr":"192.168.123.101:6809","nonce":3325110780}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":3325110780},{"type":"v1","addr":"192.168.123.101:6807","nonce":3325110780}]},"public_addr":"192.168.123.101:6803/3325110780","cluster_addr":"192.168.123.101:6805/3325110780","heartbeat_back_addr":"192.168.123.101:6809/3325110780","heartbeat_front_addr":"192.168.123.101:6807/3325110780","state":["exists","up"]},{"osd":1,"uuid":"9ae59e53-b925-4fd6-9e0b-3a01ba1d1990","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6800","nonce":2601636124},{"type":"v1","addr":"192.168.123.102:6801","nonce":2601636124}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":2601636124},{"type":"v1","addr":"192.168.123.102:6803","nonce":2601636124}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":2601636124},{"type":"v1","addr":"192.168.123.102:6807","nonce":2601636124}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":2601636124},{"type":"v1","addr":"192.168.123.102:6805","nonce":2601636124}]},"public_addr":"192.168.123.102:6801/2601636124","cluster_addr":"192.168.123.102:6803/2601636124","heartbeat_back_addr":"192.168.123.102:6807/2601636124","heartbeat_front_addr":"192.168.123.102:6805/2601636124","state":["exists","up"]},{"osd":2,"uuid":"c372ae37-7d96-4cb3-9d8e-8451dcea6556","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":1630299362},{"type":"v1","addr":"192.168.123.108:6801","nonce":1630299362}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":1630299362},{"type":"v1","addr":"192.168.123.108:6803","nonce":1630299362}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":1630299362},{"type":"v1","addr":"192.168.123.108:6807","nonce":1630299362}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":1630299362},{"type":"v1","addr":"192.168.123.108:6805","nonce":1630299362}]},"public_addr":"192.168.123.108:6801/1630299362","cluster_addr":"192.168.123.108:6803/1630299362","heartbeat_back_addr":"192.168.123.108:6807/1630299362","heartbeat_front_addr":"192.168.123.108:6805/1630299362","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:58:01.388820+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:58:12.810897+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:0/1421343949":"2026-03-11T09:57:18.184029+0000","192.168.123.101:0/273753432":"2026-03-11T09:57:18.184029+0000","192.168.123.101:6800/1245046547":"2026-03-11T09:57:18.184029+0000","192.168.123.101:0/1491905342":"2026-03-11T09:57:07.098307+0000","192.168.123.101:0/3645187606":"2026-03-11T09:57:07.098307+0000","192.168.123.101:0/2552697875":"2026-03-11T09:57:07.098307+0000","192.168.123.101:6801/2453741582":"2026-03-11T09:57:07.098307+0000","192.168.123.101:0/2218828015":"2026-03-11T09:57:18.184029+0000","192.168.123.101:6801/1245046547":"2026-03-11T09:57:18.184029+0000","192.168.123.101:6800/2453741582":"2026-03-11T09:57:07.098307+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}}
2026-03-10T09:58:26.672 INFO:tasks.cephadm.ceph_manager.ceph:[]
2026-03-10T09:58:26.672 INFO:tasks.cephadm:Setting up client nodes...
2026-03-10T09:58:26.673 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean...
2026-03-10T09:58:26.673 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available
2026-03-10T09:58:26.673 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph mgr dump --format=json
2026-03-10T09:58:26.848 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config
2026-03-10T09:58:26.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:26 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/2014656670' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T09:58:26.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:26 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T09:58:26.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:26 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/164912976' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-10T09:58:26.872 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:26 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/2014656670' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T09:58:26.872 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:26 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T09:58:26.872 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:26 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/164912976' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-10T09:58:26.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:26 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/2014656670' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T09:58:26.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:26 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T09:58:26.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:26 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/164912976' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-10T09:58:27.105 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:58:27.280 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":14,"flags":0,"active_gid":14150,"active_name":"a","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6800","nonce":2199106339},{"type":"v1","addr":"192.168.123.101:6801","nonce":2199106339}]},"active_addr":"192.168.123.101:6801/2199106339","active_change":"2026-03-10T09:57:18.184298+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":14211,"name":"b","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"des
c":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across 
cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to 
days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When a new request comes in, the oldest request is removed if the number of requests exceeds the maximum. 
If an unfinished request is removed, an error message is logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack traces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.101:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":3084876190}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.1
68.123.101:0","nonce":2515872297}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":3805589845}]}]} 2026-03-10T09:58:27.282 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T09:58:27.282 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T09:58:27.282 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph osd dump --format=json 2026-03-10T09:58:27.463 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:27.739 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:58:27.739 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":19,"fsid":"4533cc1c-1c67-11f1-85c0-e37e5114407d","created":"2026-03-10T09:56:55.158820+0000","modified":"2026-03-10T09:58:27.543520+0000","last_up_change":"2026-03-10T09:58:25.131669+0000","last_in_change":"2026-03-10T09:58:13.970127+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T09:58:26.237198+0000","flags":32769,"flags_names":"hashpspool,creating","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_t
arget":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"19","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"99f5a193-89a2-4291-8218-884df42c1152","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":3325110780},{"type":"v1","addr":"192.168.123.101:6803","nonce":3325110780}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":3325110780},{"type":"v1","addr":"192.168.123.101:6805","nonce":3325110780}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":3325110780},{"type":"v1","addr":"192.168.123.101:6809","nonce":3325110780}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":3325110780},{"type":"v1","addr":"192.168.123.101:6807","nonce":3325110780}]},"public_addr":"192.168.123.101:6803/3325110780","cluster_addr":"192.168.123.101:6805/3325110780","heartbeat_back_addr":"192.168.123.101:6809/3325110780","heartbeat_front_addr":"192.168.123.101:6807/3325110780","state":["exists","up"]},{"osd":1,"uuid":"9ae59e53-b925-4fd6-9e0b-3a01ba1d1990","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6800","nonce":2601636124},{"type":"v1","addr":"192.168.123.102:6801","nonce":2601636124}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":2601636124},{"type":"v1","addr":"192.168.123.102:6803","nonce":2601636124}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":2601636124},{"type":"v1","addr":"192.168.123.102:6807","nonce":2601636124}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"
192.168.123.102:6804","nonce":2601636124},{"type":"v1","addr":"192.168.123.102:6805","nonce":2601636124}]},"public_addr":"192.168.123.102:6801/2601636124","cluster_addr":"192.168.123.102:6803/2601636124","heartbeat_back_addr":"192.168.123.102:6807/2601636124","heartbeat_front_addr":"192.168.123.102:6805/2601636124","state":["exists","up"]},{"osd":2,"uuid":"c372ae37-7d96-4cb3-9d8e-8451dcea6556","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":1630299362},{"type":"v1","addr":"192.168.123.108:6801","nonce":1630299362}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":1630299362},{"type":"v1","addr":"192.168.123.108:6803","nonce":1630299362}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":1630299362},{"type":"v1","addr":"192.168.123.108:6807","nonce":1630299362}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":1630299362},{"type":"v1","addr":"192.168.123.108:6805","nonce":1630299362}]},"public_addr":"192.168.123.108:6801/1630299362","cluster_addr":"192.168.123.108:6803/1630299362","heartbeat_back_addr":"192.168.123.108:6807/1630299362","heartbeat_front_addr":"192.168.123.108:6805/1630299362","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:58:01.388820+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:58:12.810897+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:58:23.091722+0000","dead_epo
ch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:0/1421343949":"2026-03-11T09:57:18.184029+0000","192.168.123.101:0/273753432":"2026-03-11T09:57:18.184029+0000","192.168.123.101:6800/1245046547":"2026-03-11T09:57:18.184029+0000","192.168.123.101:0/1491905342":"2026-03-11T09:57:07.098307+0000","192.168.123.101:0/3645187606":"2026-03-11T09:57:07.098307+0000","192.168.123.101:0/2552697875":"2026-03-11T09:57:07.098307+0000","192.168.123.101:6801/2453741582":"2026-03-11T09:57:07.098307+0000","192.168.123.101:0/2218828015":"2026-03-11T09:57:18.184029+0000","192.168.123.101:6801/1245046547":"2026-03-11T09:57:18.184029+0000","192.168.123.101:6800/2453741582":"2026-03-11T09:57:07.098307+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T09:58:27.740 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:27 vm01 ceph-mon[51930]: pgmap v41: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:27.740 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:27 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T09:58:27.740 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:27 vm01 ceph-mon[51930]: osdmap e18: 3 total, 3 up, 3 in 2026-03-10T09:58:27.740 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:27 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd pool 
application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T09:58:27.740 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:27 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/1537768976' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T09:58:27.740 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:27 vm01 sudo[69117]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T09:58:27.740 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:58:27 vm01 sudo[69113]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 2026-03-10T09:58:27.741 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:58:27 vm01 sudo[69113]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T09:58:27.741 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:58:27 vm01 sudo[69113]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T09:58:27.741 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:58:27 vm01 sudo[69113]: pam_unix(sudo:session): session closed for user root 2026-03-10T09:58:27.861 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:58:27 vm08 sudo[62569]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 2026-03-10T09:58:27.861 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:58:27 vm08 sudo[62569]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T09:58:27.861 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:58:27 vm08 sudo[62569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T09:58:27.861 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:58:27 vm08 sudo[62569]: pam_unix(sudo:session): session closed for user root 2026-03-10T09:58:27.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:27 vm08 ceph-mon[55477]: pgmap v41: 0 pgs: ; 0 B data, 79 MiB 
used, 60 GiB / 60 GiB avail 2026-03-10T09:58:27.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:27 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T09:58:27.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:27 vm08 ceph-mon[55477]: osdmap e18: 3 total, 3 up, 3 in 2026-03-10T09:58:27.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:27 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T09:58:27.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:27 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/1537768976' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T09:58:27.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:27 vm08 sudo[62573]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T09:58:27.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:27 vm08 sudo[62573]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T09:58:27.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:27 vm08 sudo[62573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T09:58:27.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:27 vm08 sudo[62573]: pam_unix(sudo:session): session closed for user root 2026-03-10T09:58:27.899 INFO:tasks.cephadm.ceph_manager.ceph:all up! 
2026-03-10T09:58:27.899 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph osd dump --format=json 2026-03-10T09:58:27.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:27 vm02 ceph-mon[54811]: pgmap v41: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:27.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:27 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T09:58:27.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:27 vm02 ceph-mon[54811]: osdmap e18: 3 total, 3 up, 3 in 2026-03-10T09:58:27.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:27 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T09:58:27.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:27 vm02 ceph-mon[54811]: from='client.? 
192.168.123.101:0/1537768976' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T09:58:27.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:27 vm02 sudo[62420]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T09:58:27.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:27 vm02 sudo[62420]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T09:58:27.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:27 vm02 sudo[62420]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T09:58:27.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:27 vm02 sudo[62420]: pam_unix(sudo:session): session closed for user root 2026-03-10T09:58:27.909 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:58:27 vm02 sudo[62416]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 2026-03-10T09:58:27.909 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:58:27 vm02 sudo[62416]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T09:58:27.909 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:58:27 vm02 sudo[62416]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T09:58:27.909 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:58:27 vm02 sudo[62416]: pam_unix(sudo:session): session closed for user root 2026-03-10T09:58:28.062 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:28.090 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:27 vm01 sudo[69117]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T09:58:28.090 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:27 vm01 sudo[69117]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T09:58:28.090 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:27 vm01 sudo[69117]: pam_unix(sudo:session): session closed for user root 2026-03-10T09:58:28.307 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:58:28.307 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":19,"fsid":"4533cc1c-1c67-11f1-85c0-e37e5114407d","created":"2026-03-10T09:56:55.158820+0000","modified":"2026-03-10T09:58:27.543520+0000","last_up_change":"2026-03-10T09:58:25.131669+0000","last_in_change":"2026-03-10T09:58:13.970127+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T09:58:26.237198+0000","flags":32769,"flags_names":"hashpspool,creating","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"19","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects"
:0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"99f5a193-89a2-4291-8218-884df42c1152","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":3325110780},{"type":"v1","addr":"192.168.123.101:6803","nonce":3325110780}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":3325110780},{"type":"v1","addr":"192.168.123.101:6805","nonce":3325110780}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":3325110780},{"type":"v1","addr":"192.168.123.101:6809","nonce":3325110780}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":3325110780},{"type":"v1","addr":"192.168.123.101:6807","nonce":3325110780}]},"public_addr":"192.168.123.101:6803/3325110780","cluster_addr":"192.168.123.101:6805/3325110780","heartbeat_back_addr":"192.168.123.101:6809/3325110780","heartbeat_front_addr":"192.168.123.101:6807/3325110780","state":["exists","up"]},{"osd":1,"uuid":"9ae59e53-b925-4fd6-9e0b-3a01ba1d1990","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin
":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6800","nonce":2601636124},{"type":"v1","addr":"192.168.123.102:6801","nonce":2601636124}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":2601636124},{"type":"v1","addr":"192.168.123.102:6803","nonce":2601636124}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":2601636124},{"type":"v1","addr":"192.168.123.102:6807","nonce":2601636124}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":2601636124},{"type":"v1","addr":"192.168.123.102:6805","nonce":2601636124}]},"public_addr":"192.168.123.102:6801/2601636124","cluster_addr":"192.168.123.102:6803/2601636124","heartbeat_back_addr":"192.168.123.102:6807/2601636124","heartbeat_front_addr":"192.168.123.102:6805/2601636124","state":["exists","up"]},{"osd":2,"uuid":"c372ae37-7d96-4cb3-9d8e-8451dcea6556","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":1630299362},{"type":"v1","addr":"192.168.123.108:6801","nonce":1630299362}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":1630299362},{"type":"v1","addr":"192.168.123.108:6803","nonce":1630299362}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":1630299362},{"type":"v1","addr":"192.168.123.108:6807","nonce":1630299362}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":1630299362},{"type":"v1","addr":"192.168.123.108:6805","nonce":1630299362}]},"public_addr":"192.168.123.108:6801/1630299362","cluster_addr":"192.168.123.108:6803/1630299362","heartbeat_back_addr":"192.168.123.108:6807/1630299362","heartbeat_front_addr":"192.168.123.108:6805/1630299362","state":["exists
","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:58:01.388820+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:58:12.810897+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:58:23.091722+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:0/1421343949":"2026-03-11T09:57:18.184029+0000","192.168.123.101:0/273753432":"2026-03-11T09:57:18.184029+0000","192.168.123.101:6800/1245046547":"2026-03-11T09:57:18.184029+0000","192.168.123.101:0/1491905342":"2026-03-11T09:57:07.098307+0000","192.168.123.101:0/3645187606":"2026-03-11T09:57:07.098307+0000","192.168.123.101:0/2552697875":"2026-03-11T09:57:07.098307+0000","192.168.123.101:6801/2453741582":"2026-03-11T09:57:07.098307+0000","192.168.123.101:0/2218828015":"2026-03-11T09:57:18.184029+0000","192.168.123.101:6801/1245046547":"2026-03-11T09:57:18.184029+0000","192.168.123.101:6800/2453741582":"2026-03-11T09:57:07.098307+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T09:58:28.486 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 
4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph tell osd.0 flush_pg_stats 2026-03-10T09:58:28.487 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph tell osd.1 flush_pg_stats 2026-03-10T09:58:28.487 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph tell osd.2 flush_pg_stats 2026-03-10T09:58:28.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T09:58:28.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: osdmap e19: 3 total, 3 up, 3 in 2026-03-10T09:58:28.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='client.? 
192.168.123.101:0/2286849313' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 
09:58:28 vm01 ceph-mon[51930]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T09:58:28.680 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:28 vm01 ceph-mon[51930]: from='client.? 
192.168.123.101:0/4073554985' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T09:58:28.772 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:28.822 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:28.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T09:58:28.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: osdmap e19: 3 total, 3 up, 3 in 2026-03-10T09:58:28.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T09:58:28.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='client.? 
192.168.123.101:0/2286849313' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T09:58:28.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T09:58:28.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:58:28.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:58:28.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T09:58:28.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T09:58:28.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:58:28.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:58:28.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T09:58:28.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T09:58:28.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 
09:58:28 vm08 ceph-mon[55477]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T09:58:28.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:58:28.862 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:58:28.862 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T09:58:28.862 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T09:58:28.862 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:28 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/4073554985' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: osdmap e19: 3 total, 3 up, 3 in 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='client.? 
192.168.123.101:0/2286849313' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 
09:58:28 vm02 ceph-mon[54811]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='mgr.14150 192.168.123.101:0/778504130' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T09:58:28.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:28 vm02 ceph-mon[54811]: from='client.? 
192.168.123.101:0/4073554985' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T09:58:28.913 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:29.175 INFO:teuthology.orchestra.run.vm01.stdout:38654705670 2026-03-10T09:58:29.175 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph osd last-stat-seq osd.0 2026-03-10T09:58:29.259 INFO:teuthology.orchestra.run.vm01.stdout:55834574852 2026-03-10T09:58:29.259 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph osd last-stat-seq osd.1 2026-03-10T09:58:29.336 INFO:teuthology.orchestra.run.vm01.stdout:73014444034 2026-03-10T09:58:29.336 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph osd last-stat-seq osd.2 2026-03-10T09:58:29.408 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:29.582 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:29.673 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:29 vm01 ceph-mon[51930]: pgmap v44: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:29.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:29 vm01 ceph-mon[51930]: mgrmap e15: a(active, since 70s), standbys: b 2026-03-10T09:58:29.674 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:29 vm01 ceph-mon[51930]: osdmap e20: 3 total, 3 up, 3 
in 2026-03-10T09:58:29.755 INFO:teuthology.orchestra.run.vm01.stdout:38654705669 2026-03-10T09:58:29.757 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:29.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:29 vm08 ceph-mon[55477]: pgmap v44: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:29.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:29 vm08 ceph-mon[55477]: mgrmap e15: a(active, since 70s), standbys: b 2026-03-10T09:58:29.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:29 vm08 ceph-mon[55477]: osdmap e20: 3 total, 3 up, 3 in 2026-03-10T09:58:29.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:29 vm02 ceph-mon[54811]: pgmap v44: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:29.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:29 vm02 ceph-mon[54811]: mgrmap e15: a(active, since 70s), standbys: b 2026-03-10T09:58:29.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:29 vm02 ceph-mon[54811]: osdmap e20: 3 total, 3 up, 3 in 2026-03-10T09:58:29.940 INFO:tasks.cephadm.ceph_manager.ceph:need seq 38654705670 got 38654705669 for osd.0 2026-03-10T09:58:29.944 INFO:teuthology.orchestra.run.vm01.stdout:55834574851 2026-03-10T09:58:30.031 INFO:teuthology.orchestra.run.vm01.stdout:73014444033 2026-03-10T09:58:30.101 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574852 got 55834574851 for osd.1 2026-03-10T09:58:30.180 INFO:tasks.cephadm.ceph_manager.ceph:need seq 73014444034 got 73014444033 for osd.2 2026-03-10T09:58:30.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:30 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/2706970050' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T09:58:30.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:30 vm08 ceph-mon[55477]: from='client.? 
192.168.123.101:0/4018924128' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T09:58:30.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:30 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/185971083' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T09:58:30.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:30 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/2706970050' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T09:58:30.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:30 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/4018924128' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T09:58:30.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:30 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/185971083' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T09:58:30.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:30 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/2706970050' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T09:58:30.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:30 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/4018924128' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T09:58:30.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:30 vm01 ceph-mon[51930]: from='client.? 
192.168.123.101:0/185971083' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T09:58:30.940 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph osd last-stat-seq osd.0 2026-03-10T09:58:31.102 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph osd last-stat-seq osd.1 2026-03-10T09:58:31.122 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:31.181 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph osd last-stat-seq osd.2 2026-03-10T09:58:31.392 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:31.399 INFO:teuthology.orchestra.run.vm01.stdout:38654705670 2026-03-10T09:58:31.591 INFO:tasks.cephadm.ceph_manager.ceph:need seq 38654705670 got 38654705670 for osd.0 2026-03-10T09:58:31.592 DEBUG:teuthology.parallel:result is None 2026-03-10T09:58:31.616 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:31.733 INFO:teuthology.orchestra.run.vm01.stdout:55834574853 2026-03-10T09:58:31.743 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:31 vm01 ceph-mon[51930]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:31.743 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:31 vm01 ceph-mon[51930]: from='client.? 
192.168.123.101:0/3218502962' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T09:58:31.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:31 vm08 ceph-mon[55477]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:31.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:31 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/3218502962' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T09:58:31.874 INFO:teuthology.orchestra.run.vm01.stdout:73014444034 2026-03-10T09:58:31.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:31 vm02 ceph-mon[54811]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:31.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:31 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/3218502962' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T09:58:31.926 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574852 got 55834574853 for osd.1 2026-03-10T09:58:31.926 DEBUG:teuthology.parallel:result is None 2026-03-10T09:58:32.029 INFO:tasks.cephadm.ceph_manager.ceph:need seq 73014444034 got 73014444034 for osd.2 2026-03-10T09:58:32.029 DEBUG:teuthology.parallel:result is None 2026-03-10T09:58:32.029 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T09:58:32.029 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph pg dump --format=json 2026-03-10T09:58:32.207 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:32.461 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:58:32.461 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-10T09:58:32.649 
INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":47,"stamp":"2026-03-10T09:58:32.202503+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":82772,"kb_used_data":1860,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62819500,"statfs":{"total":64411926528,"available":64327168000,"internally_reserved":0,"allocated":1904640,"data_stored":1533900,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4770,"internal_metadata":82373982},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_
ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"2.000288"},"pg_stats":[{"pgid":"1.0","version":"19'32","reported_seq":57,"reported_epoch":20,"state":"active+clean","last_fresh":"2026-03-10T09:58:28.595893+0000","last_change":"2026-03-10T09:58:27.595271+0000","last_active":"2026-03-10T09:58:28.595893+0000","last_peered":"2026-03-10T09:58:28.595893+0000","last_clean":"2026-03-10T09:58:28.595893+0000","last_became_active":"2026-03-10T09:58:27.595118+0000","last_became_peered":"2026-03-10T09:58:27.595118+0000","last_unstale":"2026-03-10T09:58:28.595893+0000","last_undegraded":"2026-03-10T09:58:28.595893+0000","last_fullsized":"2026-03-10T09:58:28.595893+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created"
:18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T09:58:26.536264+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T09:58:26.536264+0000","last_clean_scrub_stamp":"2026-03-10T09:58:26.536264+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:18:25.572264+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,0],"acting":[1,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects
_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":17,"seq":73014444035,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27596,"kb_used_data":620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939828,"statfs":{"total":21470642176,"available":21442383872,"internally_reserved":0,"allocated":634880,"data_stored":511300,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574853,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27588,"kb_used_data":620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939836,"statfs":{"total":21470642176,"a
vailable":21442392064,"internally_reserved":0,"allocated":634880,"data_stored":511300,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":9,"seq":38654705671,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27588,"kb_used_data":620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939836,"statfs":{"total":21470642176,"available":21442392064,"internally_reserved":0,"allocated":634880,"data_stored":511300,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T09:58:32.649 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm 
--image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph pg dump --format=json 2026-03-10T09:58:32.823 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:32.847 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:32 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/245180290' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T09:58:32.847 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:32 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/3502211215' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T09:58:32.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:32 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/245180290' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T09:58:32.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:32 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/3502211215' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T09:58:32.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:32 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/245180290' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T09:58:32.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:32 vm02 ceph-mon[54811]: from='client.? 
192.168.123.101:0/3502211215' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T09:58:33.046 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:58:33.046 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-10T09:58:33.220 INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":47,"stamp":"2026-03-10T09:58:32.202503+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":82772,"kb_used_data":1860,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62819500,"statfs":{"total":64411926528,"available":64327168000,"internally_reserved":0,"allocated":1904640,"data_stored":1533900,"data_compressed":0,
"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4770,"internal_metadata":82373982},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"2.000288"},"pg_stats":[{"pgid":"1.0","version":"19'32","reported_seq":57,"reported_epoch":20,"state":"active+clean","last_fresh":"2026-03-10T09:58:28.595893+0000","last_change":"2026-03-10T09:58:27.595271+0000","last_active":"2026-03-10T09:58:28.595893+0000","last_peered":"2026-03-10T09:58:28.595893+0000","last_clean":"2026-03-10T09:58:28.595893+0000","last_became_active":"2026-03-10T09:58:27.595118+0000","
last_became_peered":"2026-03-10T09:58:27.595118+0000","last_unstale":"2026-03-10T09:58:28.595893+0000","last_undegraded":"2026-03-10T09:58:28.595893+0000","last_fullsized":"2026-03-10T09:58:28.595893+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T09:58:26.536264+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T09:58:26.536264+0000","last_clean_scrub_stamp":"2026-03-10T09:58:26.536264+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:18:25.572264+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,0],"acting":[1,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"
up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":17,"seq":73014444035,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27596,"kb_used_data":620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939828,"statfs":{"total":21470642176,"available":21442383872,"internally_reserved":0,"allocated":634880,"data_stored":511300,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_late
ncy_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574853,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27588,"kb_used_data":620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939836,"statfs":{"total":21470642176,"available":21442392064,"internally_reserved":0,"allocated":634880,"data_stored":511300,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":9,"seq":38654705671,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27588,"kb_used_data":620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939836,"statfs":{"total":21470642176,"available":21442392064,"internally_reserved":0,"allocated":634880,"data_stored":511300,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_r
eserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T09:58:33.220 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T09:58:33.220 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-10T09:58:33.220 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T09:58:33.220 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph health --format=json 2026-03-10T09:58:33.389 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:33.638 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:58:33.638 INFO:teuthology.orchestra.run.vm01.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T09:58:33.761 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:33 vm01 ceph-mon[51930]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:33.761 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:33 vm01 ceph-mon[51930]: from='client.14388 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T09:58:33.787 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T09:58:33.787 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T09:58:33.787 INFO:teuthology.run_tasks:Running task cephadm.shell... 
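The "waiting for clean" / "wait_until_healthy" sequence above is teuthology polling `ceph pg dump --format=json` until every PG is `active+clean`, then checking `ceph health --format=json` for `HEALTH_OK`. A minimal sketch of that polling pattern (the helper names `is_clean` and `wait_until` are illustrative, not teuthology's actual API):

```python
import json
import time

def is_clean(pg_dump_json):
    """True when every PG reported by `ceph pg dump --format=json`
    is both active and clean (cf. the "active+clean" states above)."""
    pg_map = json.loads(pg_dump_json)["pg_map"]
    return all("active" in pg["state"] and "clean" in pg["state"]
               for pg in pg_map["pg_stats"])

def wait_until(check, timeout=300, interval=1.0, sleep=time.sleep):
    """Poll `check` until it returns True or `timeout` seconds elapse.
    Returns the time waited; raises TimeoutError on expiry."""
    waited = 0.0
    while not check():
        if waited >= timeout:
            raise TimeoutError("cluster did not settle in time")
        sleep(interval)
        waited += interval
    return waited
```

In the run above the loop exits on the second `pg dump`, once `1 pgs: 1 active+clean` is reported, and the subsequent health check returns `{"status":"HEALTH_OK",...}`.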
2026-03-10T09:58:33.789 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm01.local 2026-03-10T09:58:33.789 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- bash -c 'ceph mgr module enable rgw' 2026-03-10T09:58:33.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:33 vm08 ceph-mon[55477]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:33.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:33 vm08 ceph-mon[55477]: from='client.14388 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T09:58:33.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:33 vm02 ceph-mon[54811]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:33.908 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:33 vm02 ceph-mon[54811]: from='client.14388 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T09:58:33.954 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:34.769 INFO:teuthology.run_tasks:Running task rgw_module.apply... 
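Every `ceph` command in this log is wrapped in the same `sudo cephadm shell` invocation, which runs the command inside the container image under test. A sketch of how that argument vector is assembled (the function name and defaults are illustrative; the paths and flags are taken from the log lines above):

```python
def cephadm_shell_cmd(image, fsid, command,
                      conf="/etc/ceph/ceph.conf",
                      keyring="/etc/ceph/ceph.client.admin.keyring",
                      cephadm="/home/ubuntu/cephtest/cephadm"):
    """Build the `sudo cephadm shell ... -- bash -c '<command>'` wrapper
    seen throughout this log: cephadm pulls the named image, mounts the
    cluster's config/keyring for the given fsid, and runs the command."""
    return ["sudo", cephadm, "--image", image, "shell",
            "-c", conf, "-k", keyring, "--fsid", fsid,
            "--", "bash", "-c", command]
```

For example, the `mgr module enable rgw` line above corresponds to `cephadm_shell_cmd("quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "4533cc1c-1c67-11f1-85c0-e37e5114407d", "ceph mgr module enable rgw")`.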
2026-03-10T09:58:34.773 INFO:tasks.rgw_module:Applying spec(s): rgw_realm: myrealm1 rgw_zone: myzone1 rgw_zonegroup: myzonegroup1 spec: rgw_frontend_port: 5500
2026-03-10T09:58:34.773 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph rgw realm bootstrap -i -
2026-03-10T09:58:34.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:34 vm08 ceph-mon[55477]: from='client.14394 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T09:58:34.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:34 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/2738269520' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2026-03-10T09:58:34.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:34 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/1233357498' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "rgw"}]: dispatch
2026-03-10T09:58:34.908 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:34 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: ignoring --setuser ceph since I am not root
2026-03-10T09:58:34.909 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:34 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: ignoring --setgroup ceph since I am not root
2026-03-10T09:58:34.909 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:34 vm02 ceph-mgr[56180]: -- 192.168.123.102:0/1243619525 <== mon.1 v2:192.168.123.108:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x557c9e64b4a0 con 0x557c9e628800
2026-03-10T09:58:34.909 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:34 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:34.746+0000 7f574b181140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T09:58:34.909 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:34 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:34.794+0000 7f574b181140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T09:58:34.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:34 vm02 ceph-mon[54811]: from='client.14394 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T09:58:34.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:34 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/2738269520' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2026-03-10T09:58:34.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:34 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/1233357498' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "rgw"}]: dispatch
2026-03-10T09:58:34.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:34 vm01 ceph-mon[51930]: from='client.14394 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T09:58:34.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:34 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/2738269520' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2026-03-10T09:58:34.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:34 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/1233357498' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "rgw"}]: dispatch
2026-03-10T09:58:34.930 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:34 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: ignoring --setuser ceph since I am not root
2026-03-10T09:58:34.930 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:34 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: ignoring --setgroup ceph since I am not root
2026-03-10T09:58:34.930 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:34 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:34.757+0000 7f405a90c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T09:58:34.930 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:34 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:34.805+0000 7f405a90c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T09:58:34.963 INFO:teuthology.orchestra.run.vm01.stdout:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config
2026-03-10T09:58:35.571 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:35 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:35.220+0000 7f574b181140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T09:58:35.592 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:35 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:35.253+0000 7f405a90c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T09:58:35.592 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:35 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:35.591+0000 7f405a90c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T09:58:35.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:35 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/1233357498' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "rgw"}]': finished
2026-03-10T09:58:35.861 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:35 vm08 ceph-mon[55477]: mgrmap e16: a(active, since 76s), standbys: b
2026-03-10T09:58:35.909 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:35 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:35.571+0000 7f574b181140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T09:58:35.909 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:35 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T09:58:35.909 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:35 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T09:58:35.909 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:35 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: from numpy import show_config as show_numpy_config
2026-03-10T09:58:35.909 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:35 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:35.664+0000 7f574b181140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T09:58:35.909 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:35 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:35.702+0000 7f574b181140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T09:58:35.909 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:35 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:35.775+0000 7f574b181140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T09:58:35.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:35 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/1233357498' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "rgw"}]': finished
2026-03-10T09:58:35.909 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:35 vm02 ceph-mon[54811]: mgrmap e16: a(active, since 76s), standbys: b
2026-03-10T09:58:35.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:35 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/1233357498' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "rgw"}]': finished
2026-03-10T09:58:35.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:35 vm01 ceph-mon[51930]: mgrmap e16: a(active, since 76s), standbys: b
2026-03-10T09:58:35.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:35 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T09:58:35.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:35 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T09:58:35.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:35 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: from numpy import show_config as show_numpy_config
2026-03-10T09:58:35.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:35 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:35.682+0000 7f405a90c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T09:58:35.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:35 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:35.720+0000 7f405a90c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T09:58:35.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:35 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:35.792+0000 7f405a90c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T09:58:36.569 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:36 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:36.285+0000 7f574b181140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T09:58:36.570 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:36 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:36.403+0000 7f574b181140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T09:58:36.570 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:36 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:36.444+0000 7f574b181140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T09:58:36.570 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:36 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:36.483+0000 7f574b181140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T09:58:36.570 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:36 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:36.528+0000 7f574b181140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T09:58:36.581 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:36 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:36.300+0000 7f405a90c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T09:58:36.581 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:36 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:36.416+0000 7f405a90c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T09:58:36.581 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:36 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:36.457+0000 7f405a90c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T09:58:36.581 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:36 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:36.495+0000 7f405a90c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T09:58:36.581 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:36 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:36.539+0000 7f405a90c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T09:58:36.581 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:36 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:36.580+0000 7f405a90c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T09:58:36.908 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:36 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:36.569+0000 7f574b181140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T09:58:36.908 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:36 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:36.748+0000 7f574b181140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T09:58:36.908 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:36 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:36.801+0000 7f574b181140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T09:58:36.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:36 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:36.757+0000 7f405a90c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T09:58:36.929 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:36 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:36.809+0000 7f405a90c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T09:58:37.325 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:37.023+0000 7f574b181140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T09:58:37.338 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:37.031+0000 7f405a90c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T09:58:37.624 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:37.324+0000 7f574b181140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T09:58:37.624 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:37.365+0000 7f574b181140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T09:58:37.624 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:37.410+0000 7f574b181140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T09:58:37.624 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:37.493+0000 7f574b181140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T09:58:37.624 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:37.534+0000 7f574b181140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T09:58:37.643 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:37.337+0000 7f405a90c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T09:58:37.643 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:37.382+0000 7f405a90c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T09:58:37.643 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:37.428+0000 7f405a90c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T09:58:37.644 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:37.512+0000 7f405a90c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T09:58:37.644 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:37.553+0000 7f405a90c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T09:58:37.895 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:37.623+0000 7f574b181140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T09:58:37.895 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:37.747+0000 7f574b181140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T09:58:37.911 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:37.643+0000 7f405a90c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T09:58:37.912 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:37.767+0000 7f405a90c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T09:58:38.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: from='mgr.? 192.168.123.102:0/734497931' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch
2026-03-10T09:58:38.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: Standby manager daemon b restarted
2026-03-10T09:58:38.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: Standby manager daemon b started
2026-03-10T09:58:38.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: from='mgr.? 192.168.123.102:0/734497931' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T09:58:38.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: from='mgr.? 192.168.123.102:0/734497931' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch
2026-03-10T09:58:38.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: from='mgr.? 192.168.123.102:0/734497931' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: Active manager daemon a restarted
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: Activating manager daemon a
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: osdmap e21: 3 total, 3 up, 3 in
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: mgrmap e17: a(active, starting, since 0.0117695s), standbys: b
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-mon[54811]: Manager daemon a is now available
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:37.894+0000 7f574b181140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T09:58:38.160 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:37 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b[56129]: 2026-03-10T09:58:37.934+0000 7f574b181140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: from='mgr.? 192.168.123.102:0/734497931' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: Standby manager daemon b restarted
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: Standby manager daemon b started
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: from='mgr.? 192.168.123.102:0/734497931' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: from='mgr.? 192.168.123.102:0/734497931' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: from='mgr.? 192.168.123.102:0/734497931' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: Active manager daemon a restarted
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: Activating manager daemon a
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: osdmap e21: 3 total, 3 up, 3 in
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: mgrmap e17: a(active, starting, since 0.0117695s), standbys: b
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-mon[51930]: Manager daemon a is now available
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:37.911+0000 7f405a90c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T09:58:38.180 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:37 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a[52164]: 2026-03-10T09:58:37.952+0000 7f405a90c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T09:58:38.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: from='mgr.? 192.168.123.102:0/734497931' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch
2026-03-10T09:58:38.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: Standby manager daemon b restarted
2026-03-10T09:58:38.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: Standby manager daemon b started
2026-03-10T09:58:38.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: from='mgr.? 192.168.123.102:0/734497931' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T09:58:38.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: from='mgr.? 192.168.123.102:0/734497931' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch
2026-03-10T09:58:38.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: from='mgr.? 192.168.123.102:0/734497931' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T09:58:38.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: Active manager daemon a restarted
2026-03-10T09:58:38.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: Activating manager daemon a
2026-03-10T09:58:38.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: osdmap e21: 3 total, 3 up, 3 in
2026-03-10T09:58:38.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: mgrmap e17: a(active, starting, since 0.0117695s), standbys: b
2026-03-10T09:58:38.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T09:58:38.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T09:58:38.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T09:58:38.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T09:58:38.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch
2026-03-10T09:58:38.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T09:58:38.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T09:58:38.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T09:58:38.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T09:58:38.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T09:58:38.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T09:58:38.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:37 vm08 ceph-mon[55477]: Manager daemon a is now available
2026-03-10T09:58:39.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:39 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:58:39.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:39 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T09:58:39.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:39 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T09:58:39.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:39 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T09:58:39.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:39 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T09:58:39.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:39 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a'
2026-03-10T09:58:39.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:39 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a'
2026-03-10T09:58:39.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:39 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a'
2026-03-10T09:58:39.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:39 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a'
2026-03-10T09:58:39.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:39 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a'
2026-03-10T09:58:39.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:39 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a'
2026-03-10T09:58:39.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:39 vm08 ceph-mon[55477]: mgrmap e18: a(active, since 1.02568s), standbys: b
2026-03-10T09:58:39.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:39 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:58:39.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:39 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T09:58:39.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:39 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T09:58:39.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:39 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T09:58:39.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:39 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T09:58:39.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:39 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a'
2026-03-10T09:58:39.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:39 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a'
2026-03-10T09:58:39.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:39 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a'
2026-03-10T09:58:39.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:39 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a'
2026-03-10T09:58:39.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:39 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a'
2026-03-10T09:58:39.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:39 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a'
2026-03-10T09:58:39.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:39 vm02 ceph-mon[54811]: mgrmap e18: a(active, since 1.02568s), standbys: b
2026-03-10T09:58:39.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:39 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:58:39.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:39 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T09:58:39.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:39 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T09:58:39.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:39 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T09:58:39.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:39 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T09:58:39.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:39 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a'
2026-03-10T09:58:39.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:39 vm01 ceph-mon[51930]: from='mgr.14418
' entity='mgr.a' 2026-03-10T09:58:39.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:39 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:39.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:39 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:39.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:39 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:39.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:39 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:39.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:39 vm01 ceph-mon[51930]: mgrmap e18: a(active, since 1.02568s), standbys: b 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: Adjusting osd_memory_target on vm02 to 257.0M 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: Unable to set osd_memory_target on vm02 to 
269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: Adjusting osd_memory_target on vm01 to 257.0M 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: Unable to set osd_memory_target on vm01 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: Updating vm01:/etc/ceph/ceph.conf 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: Updating vm02:/etc/ceph/ceph.conf 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: 
Updating vm08:/etc/ceph/ceph.conf 2026-03-10T09:58:40.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: [10/Mar/2026:09:58:39] ENGINE Bus STARTING 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: Updating vm08:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: Updating vm02:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: [10/Mar/2026:09:58:39] ENGINE Serving on http://192.168.123.101:8765 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: Updating vm01:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: [10/Mar/2026:09:58:39] ENGINE Client ('192.168.123.101', 42950) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: [10/Mar/2026:09:58:39] ENGINE Serving on https://192.168.123.101:7150 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: [10/Mar/2026:09:58:39] ENGINE Bus STARTED 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:58:40.362 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: Updating vm02:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.client.admin.keyring 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: Updating vm08:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.client.admin.keyring 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-10T09:58:40.362 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:40 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: Adjusting osd_memory_target on vm02 to 257.0M 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: Unable to set osd_memory_target on vm02 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": 
"osd_memory_target"}]: dispatch 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: Adjusting osd_memory_target on vm01 to 257.0M 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: Unable to set osd_memory_target on vm01 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:40.409 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: Updating vm01:/etc/ceph/ceph.conf 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: Updating vm02:/etc/ceph/ceph.conf 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: Updating vm08:/etc/ceph/ceph.conf 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: [10/Mar/2026:09:58:39] ENGINE Bus STARTING 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: Updating vm08:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: Updating vm02:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 
2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: [10/Mar/2026:09:58:39] ENGINE Serving on http://192.168.123.101:8765 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: Updating vm01:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: [10/Mar/2026:09:58:39] ENGINE Client ('192.168.123.101', 42950) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: [10/Mar/2026:09:58:39] ENGINE Serving on https://192.168.123.101:7150 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: [10/Mar/2026:09:58:39] ENGINE Bus STARTED 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: Updating vm02:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.client.admin.keyring 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: Updating vm08:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.client.admin.keyring 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.410 
INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:40.410 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:40 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 
2026-03-10T09:58:40.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:40.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:40.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: Adjusting osd_memory_target on vm02 to 257.0M 2026-03-10T09:58:40.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: Unable to set osd_memory_target on vm02 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T09:58:40.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:40.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:58:40.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: Adjusting osd_memory_target on vm01 to 257.0M 2026-03-10T09:58:40.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: Unable to set osd_memory_target on vm01 to 
269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-10T09:58:40.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: Updating vm01:/etc/ceph/ceph.conf 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: Updating vm02:/etc/ceph/ceph.conf 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: Updating vm08:/etc/ceph/ceph.conf 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: [10/Mar/2026:09:58:39] ENGINE Bus STARTING 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: Updating vm08:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: Updating vm02:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: [10/Mar/2026:09:58:39] ENGINE Serving on http://192.168.123.101:8765 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: Updating vm01:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.conf 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: [10/Mar/2026:09:58:39] ENGINE Client ('192.168.123.101', 42950) lost — peer dropped the TLS connection 
suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: [10/Mar/2026:09:58:39] ENGINE Serving on https://192.168.123.101:7150 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: [10/Mar/2026:09:58:39] ENGINE Bus STARTED 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: Updating vm02:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.client.admin.keyring 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: Updating vm08:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.client.admin.keyring 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 
09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:40.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:40 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:41.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:41 vm01 ceph-mon[51930]: Updating vm01:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.client.admin.keyring 2026-03-10T09:58:41.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:41 vm01 ceph-mon[51930]: pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:41.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:41 vm01 ceph-mon[51930]: osdmap e22: 3 total, 3 up, 3 in 2026-03-10T09:58:41.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:41 vm01 ceph-mon[51930]: from='client.? 
192.168.123.101:0/3227824234' entity='mgr.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T09:58:41.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:41 vm01 ceph-mon[51930]: mgrmap e19: a(active, since 2s), standbys: b 2026-03-10T09:58:41.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:41 vm08 ceph-mon[55477]: Updating vm01:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.client.admin.keyring 2026-03-10T09:58:41.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:41 vm08 ceph-mon[55477]: pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:41.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:41 vm08 ceph-mon[55477]: osdmap e22: 3 total, 3 up, 3 in 2026-03-10T09:58:41.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:41 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/3227824234' entity='mgr.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T09:58:41.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:41 vm08 ceph-mon[55477]: mgrmap e19: a(active, since 2s), standbys: b 2026-03-10T09:58:41.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:41 vm02 ceph-mon[54811]: Updating vm01:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/config/ceph.client.admin.keyring 2026-03-10T09:58:41.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:41 vm02 ceph-mon[54811]: pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:41.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:41 vm02 ceph-mon[54811]: osdmap e22: 3 total, 3 up, 3 in 2026-03-10T09:58:41.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:41 vm02 ceph-mon[54811]: from='client.? 
192.168.123.101:0/3227824234' entity='mgr.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T09:58:41.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:41 vm02 ceph-mon[54811]: mgrmap e19: a(active, since 2s), standbys: b 2026-03-10T09:58:42.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:42 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/3227824234' entity='mgr.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T09:58:42.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:42 vm01 ceph-mon[51930]: osdmap e23: 3 total, 3 up, 3 in 2026-03-10T09:58:42.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:42 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/3227824234' entity='mgr.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T09:58:42.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:42 vm08 ceph-mon[55477]: osdmap e23: 3 total, 3 up, 3 in 2026-03-10T09:58:42.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:42 vm02 ceph-mon[54811]: from='client.? 
192.168.123.101:0/3227824234' entity='mgr.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T09:58:42.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:42 vm02 ceph-mon[54811]: osdmap e23: 3 total, 3 up, 3 in 2026-03-10T09:58:43.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:43 vm08 ceph-mon[55477]: pgmap v7: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:43.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:43 vm08 ceph-mon[55477]: osdmap e24: 3 total, 3 up, 3 in 2026-03-10T09:58:43.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:43 vm08 ceph-mon[55477]: mgrmap e20: a(active, since 4s), standbys: b 2026-03-10T09:58:43.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:43 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd=[{"prefix": "osd pool application enable","pool": "myzone1.rgw.log","app": "rgw"}]: dispatch 2026-03-10T09:58:43.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:43 vm02 ceph-mon[54811]: pgmap v7: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:43.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:43 vm02 ceph-mon[54811]: osdmap e24: 3 total, 3 up, 3 in 2026-03-10T09:58:43.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:43 vm02 ceph-mon[54811]: mgrmap e20: a(active, since 4s), standbys: b 2026-03-10T09:58:43.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:43 vm02 ceph-mon[54811]: from='client.? 
192.168.123.101:0/3924492745' entity='mgr.a' cmd=[{"prefix": "osd pool application enable","pool": "myzone1.rgw.log","app": "rgw"}]: dispatch 2026-03-10T09:58:43.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:43 vm01 ceph-mon[51930]: pgmap v7: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:43.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:43 vm01 ceph-mon[51930]: osdmap e24: 3 total, 3 up, 3 in 2026-03-10T09:58:43.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:43 vm01 ceph-mon[51930]: mgrmap e20: a(active, since 4s), standbys: b 2026-03-10T09:58:43.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:43 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd=[{"prefix": "osd pool application enable","pool": "myzone1.rgw.log","app": "rgw"}]: dispatch 2026-03-10T09:58:44.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:44 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd='[{"prefix": "osd pool application enable","pool": "myzone1.rgw.log","app": "rgw"}]': finished 2026-03-10T09:58:44.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:44 vm08 ceph-mon[55477]: osdmap e25: 3 total, 3 up, 3 in 2026-03-10T09:58:44.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:44 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd='[{"prefix": "osd pool application enable","pool": "myzone1.rgw.log","app": "rgw"}]': finished 2026-03-10T09:58:44.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:44 vm02 ceph-mon[54811]: osdmap e25: 3 total, 3 up, 3 in 2026-03-10T09:58:44.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:44 vm01 ceph-mon[51930]: from='client.? 
192.168.123.101:0/3924492745' entity='mgr.a' cmd='[{"prefix": "osd pool application enable","pool": "myzone1.rgw.log","app": "rgw"}]': finished 2026-03-10T09:58:44.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:44 vm01 ceph-mon[51930]: osdmap e25: 3 total, 3 up, 3 in 2026-03-10T09:58:45.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:45 vm08 ceph-mon[55477]: pgmap v10: 65 pgs: 64 unknown, 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:45.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:45 vm08 ceph-mon[55477]: osdmap e26: 3 total, 3 up, 3 in 2026-03-10T09:58:45.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:45 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd=[{"prefix": "osd pool application enable","pool": "myzone1.rgw.control","app": "rgw"}]: dispatch 2026-03-10T09:58:45.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:45 vm02 ceph-mon[54811]: pgmap v10: 65 pgs: 64 unknown, 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:45.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:45 vm02 ceph-mon[54811]: osdmap e26: 3 total, 3 up, 3 in 2026-03-10T09:58:45.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:45 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd=[{"prefix": "osd pool application enable","pool": "myzone1.rgw.control","app": "rgw"}]: dispatch 2026-03-10T09:58:45.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:45 vm01 ceph-mon[51930]: pgmap v10: 65 pgs: 64 unknown, 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:58:45.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:45 vm01 ceph-mon[51930]: osdmap e26: 3 total, 3 up, 3 in 2026-03-10T09:58:45.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:45 vm01 ceph-mon[51930]: from='client.? 
192.168.123.101:0/3924492745' entity='mgr.a' cmd=[{"prefix": "osd pool application enable","pool": "myzone1.rgw.control","app": "rgw"}]: dispatch 2026-03-10T09:58:46.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:46 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd='[{"prefix": "osd pool application enable","pool": "myzone1.rgw.control","app": "rgw"}]': finished 2026-03-10T09:58:46.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:46 vm08 ceph-mon[55477]: osdmap e27: 3 total, 3 up, 3 in 2026-03-10T09:58:46.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:46 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd='[{"prefix": "osd pool application enable","pool": "myzone1.rgw.control","app": "rgw"}]': finished 2026-03-10T09:58:46.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:46 vm02 ceph-mon[54811]: osdmap e27: 3 total, 3 up, 3 in 2026-03-10T09:58:46.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:46 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd='[{"prefix": "osd pool application enable","pool": "myzone1.rgw.control","app": "rgw"}]': finished 2026-03-10T09:58:46.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:46 vm01 ceph-mon[51930]: osdmap e27: 3 total, 3 up, 3 in 2026-03-10T09:58:47.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:47 vm08 ceph-mon[55477]: pgmap v13: 97 pgs: 32 unknown, 65 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 31 op/s 2026-03-10T09:58:47.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:47 vm08 ceph-mon[55477]: osdmap e28: 3 total, 3 up, 3 in 2026-03-10T09:58:47.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:47 vm08 ceph-mon[55477]: from='client.? 
192.168.123.101:0/3924492745' entity='mgr.a' cmd=[{"prefix": "osd pool application enable","pool": "myzone1.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T09:58:47.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:47 vm02 ceph-mon[54811]: pgmap v13: 97 pgs: 32 unknown, 65 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 31 op/s 2026-03-10T09:58:47.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:47 vm02 ceph-mon[54811]: osdmap e28: 3 total, 3 up, 3 in 2026-03-10T09:58:47.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:47 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd=[{"prefix": "osd pool application enable","pool": "myzone1.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T09:58:47.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:47 vm01 ceph-mon[51930]: pgmap v13: 97 pgs: 32 unknown, 65 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 31 op/s 2026-03-10T09:58:47.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:47 vm01 ceph-mon[51930]: osdmap e28: 3 total, 3 up, 3 in 2026-03-10T09:58:47.679 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:47 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd=[{"prefix": "osd pool application enable","pool": "myzone1.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T09:58:48.565 INFO:teuthology.orchestra.run.vm01.stdout:Realm(s) created correctly. Please, use 'ceph rgw realm tokens' to get the token. 2026-03-10T09:58:48.565 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:48 vm01 ceph-mon[51930]: from='client.? 
192.168.123.101:0/3924492745' entity='mgr.a' cmd='[{"prefix": "osd pool application enable","pool": "myzone1.rgw.meta","app": "rgw"}]': finished 2026-03-10T09:58:48.565 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:48 vm01 ceph-mon[51930]: osdmap e29: 3 total, 3 up, 3 in 2026-03-10T09:58:48.565 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:48 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd=[{"prefix": "osd pool set", "pool": "myzone1.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T09:58:48.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:48 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd='[{"prefix": "osd pool application enable","pool": "myzone1.rgw.meta","app": "rgw"}]': finished 2026-03-10T09:58:48.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:48 vm08 ceph-mon[55477]: osdmap e29: 3 total, 3 up, 3 in 2026-03-10T09:58:48.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:48 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd=[{"prefix": "osd pool set", "pool": "myzone1.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T09:58:48.658 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:48 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd='[{"prefix": "osd pool application enable","pool": "myzone1.rgw.meta","app": "rgw"}]': finished 2026-03-10T09:58:48.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:48 vm02 ceph-mon[54811]: osdmap e29: 3 total, 3 up, 3 in 2026-03-10T09:58:48.659 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:48 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd=[{"prefix": "osd pool set", "pool": "myzone1.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T09:58:48.723 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-10T09:58:48.726 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm01.local 2026-03-10T09:58:48.726 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- bash -c 'set -e 2026-03-10T09:58:48.726 DEBUG:teuthology.orchestra.run.vm01:> set -x 2026-03-10T09:58:48.726 DEBUG:teuthology.orchestra.run.vm01:> while true; do TOKEN=$(ceph rgw realm tokens | jq -r '"'"'.[0].token'"'"'); echo $TOKEN; if [ "$TOKEN" != "master zone has no endpoint" ]; then break; fi; sleep 5; done 2026-03-10T09:58:48.726 DEBUG:teuthology.orchestra.run.vm01:> TOKENS=$(ceph rgw realm tokens) 2026-03-10T09:58:48.726 DEBUG:teuthology.orchestra.run.vm01:> echo $TOKENS | jq --exit-status '"'"'.[0].realm == "myrealm1"'"'"' 2026-03-10T09:58:48.726 DEBUG:teuthology.orchestra.run.vm01:> echo $TOKENS | jq --exit-status '"'"'.[0].token'"'"' 2026-03-10T09:58:48.726 DEBUG:teuthology.orchestra.run.vm01:> TOKEN_JSON=$(ceph rgw realm tokens | jq -r '"'"'.[0].token'"'"' | base64 --decode) 2026-03-10T09:58:48.726 DEBUG:teuthology.orchestra.run.vm01:> echo $TOKEN_JSON | jq --exit-status '"'"'.realm_name == "myrealm1"'"'"' 2026-03-10T09:58:48.726 DEBUG:teuthology.orchestra.run.vm01:> echo $TOKEN_JSON | jq --exit-status '"'"'.endpoint | test("http://.+:\\d+")'"'"' 2026-03-10T09:58:48.726 DEBUG:teuthology.orchestra.run.vm01:> echo $TOKEN_JSON | jq --exit-status '"'"'.realm_id | test("^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")'"'"' 2026-03-10T09:58:48.726 DEBUG:teuthology.orchestra.run.vm01:> echo $TOKEN_JSON | jq --exit-status '"'"'.access_key'"'"' 2026-03-10T09:58:48.726 DEBUG:teuthology.orchestra.run.vm01:> echo $TOKEN_JSON | jq --exit-status '"'"'.secret'"'"' 2026-03-10T09:58:48.726 DEBUG:teuthology.orchestra.run.vm01:> ' 2026-03-10T09:58:48.922 
INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:49.015 INFO:teuthology.orchestra.run.vm01.stderr:+ true 2026-03-10T09:58:49.015 INFO:teuthology.orchestra.run.vm01.stderr:++ jq -r '.[0].token' 2026-03-10T09:58:49.017 INFO:teuthology.orchestra.run.vm01.stderr:++ ceph rgw realm tokens 2026-03-10T09:58:49.501 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: pgmap v16: 129 pgs: 64 unknown, 65 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 31 op/s 2026-03-10T09:58:49.501 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd='[{"prefix": "osd pool set", "pool": "myzone1.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T09:58:49.501 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: osdmap e30: 3 total, 3 up, 3 in 2026-03-10T09:58:49.501 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: Saving service rgw.myrealm1.myzone1 spec with placement count:2 2026-03-10T09:58:49.501 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.501 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:49.501 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:49.501 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-10T09:58:49.501 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.501 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.501 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.501 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.501 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm02.absswh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T09:58:49.501 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm02.absswh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T09:58:49.501 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm02.absswh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T09:58:49.502 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.502 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:49.502 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:49 vm02 ceph-mon[54811]: Deploying daemon 
rgw.myrealm1.myzone1.vm02.absswh on vm02 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: pgmap v16: 129 pgs: 64 unknown, 65 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 31 op/s 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd='[{"prefix": "osd pool set", "pool": "myzone1.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: osdmap e30: 3 total, 3 up, 3 in 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: Saving service rgw.myrealm1.myzone1 spec with placement count:2 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 
vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm02.absswh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm02.absswh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm02.absswh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:49.611 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:49 vm08 ceph-mon[55477]: Deploying daemon rgw.myrealm1.myzone1.vm02.absswh on vm02 2026-03-10T09:58:49.617 INFO:teuthology.orchestra.run.vm01.stderr:+ TOKEN='master zone has no endpoint' 2026-03-10T09:58:49.617 INFO:teuthology.orchestra.run.vm01.stderr:+ echo master zone has no endpoint 2026-03-10T09:58:49.618 INFO:teuthology.orchestra.run.vm01.stdout:master zone has no endpoint 
2026-03-10T09:58:49.618 INFO:teuthology.orchestra.run.vm01.stderr:+ '[' 'master zone has no endpoint' '!=' 'master zone has no endpoint' ']' 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: pgmap v16: 129 pgs: 64 unknown, 65 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 31 op/s 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: from='client.? 192.168.123.101:0/3924492745' entity='mgr.a' cmd='[{"prefix": "osd pool set", "pool": "myzone1.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: osdmap e30: 3 total, 3 up, 3 in 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: Saving service rgw.myrealm1.myzone1 spec with placement count:2 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: 
from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm02.absswh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm02.absswh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm02.absswh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:49.618 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:49 vm01 ceph-mon[51930]: Deploying daemon rgw.myrealm1.myzone1.vm02.absswh on vm02 2026-03-10T09:58:49.618 INFO:teuthology.orchestra.run.vm01.stderr:+ sleep 5 2026-03-10T09:58:50.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:50 vm01 ceph-mon[51930]: from='client.24370 -' 
entity='client.admin' cmd=[{"prefix": "rgw realm tokens", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:58:50.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:50 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:50.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:50 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:50.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:50 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:50.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:50 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm08.xyptdc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T09:58:50.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:50 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm08.xyptdc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T09:58:50.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:50 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm08.xyptdc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T09:58:50.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:50 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:50.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:50 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:50.929 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:50 vm01 ceph-mon[51930]: Deploying daemon rgw.myrealm1.myzone1.vm08.xyptdc on vm08 
2026-03-10T09:58:50.994 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:50 vm08 ceph-mon[55477]: from='client.24370 -' entity='client.admin' cmd=[{"prefix": "rgw realm tokens", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:58:50.994 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:50 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:50.994 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:50 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:50.994 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:50 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:50.994 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:50 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm08.xyptdc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T09:58:50.994 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:50 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm08.xyptdc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T09:58:50.994 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:50 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm08.xyptdc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T09:58:50.994 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:50 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:50.994 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:50 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:50.994 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:50 vm08 ceph-mon[55477]: Deploying daemon rgw.myrealm1.myzone1.vm08.xyptdc on vm08 2026-03-10T09:58:51.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:50 vm02 ceph-mon[54811]: from='client.24370 -' entity='client.admin' cmd=[{"prefix": "rgw realm tokens", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:58:51.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:50 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:51.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:50 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:51.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:50 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:51.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:50 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm08.xyptdc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T09:58:51.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:50 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm08.xyptdc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T09:58:51.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:50 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.myrealm1.myzone1.vm08.xyptdc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T09:58:51.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:50 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:51.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:50 vm02 ceph-mon[54811]: from='mgr.14418 
192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:51.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:50 vm02 ceph-mon[54811]: Deploying daemon rgw.myrealm1.myzone1.vm08.xyptdc on vm08 2026-03-10T09:58:52.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:51 vm08 ceph-mon[55477]: pgmap v18: 129 pgs: 129 active+clean; 455 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 5.3 KiB/s wr, 23 op/s 2026-03-10T09:58:52.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:51 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:52.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:51 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:52.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:51 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:52.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:51 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "rgw zone modify", "realm_name": "myrealm1", "zonegroup_name": "myzonegroup1", "zone_name": "myzone1", "realm_token": "ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbTEiLAogICAgInJlYWxtX2lkIjogIjFmNzZiN2EyLWYzOTAtNDUyMS1iYzY4LTQzZjQ3Yjk0MWUzZSIsCiAgICAiZW5kcG9pbnQiOiBudWxsLAogICAgImFjY2Vzc19rZXkiOiAiU1BQREpRNzI3UlpJMEVLTFIyUU8iLAogICAgInNlY3JldCI6ICJiZmUycUZ0MTRaSEFYSldiNmZqcWxOaTRvekZjS3VFbk1GU3RVZUs0Igp9", "zone_endpoints": ["http://192.168.123.102:5500", "http://192.168.123.108:5500"]}]: dispatch 2026-03-10T09:58:52.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:51 vm08 ceph-mon[55477]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "rgw zone modify", "realm_name": "myrealm1", "zonegroup_name": "myzonegroup1", "zone_name": "myzone1", "realm_token": "ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbTEiLAogICAgInJlYWxtX2lkIjogIjFmNzZiN2EyLWYzOTAtNDUyMS1iYzY4LTQzZjQ3Yjk0MWUzZSIsCiAgICAiZW5kcG9pbnQiOiBudWxsLAogICAgImFjY2Vzc19rZXkiOiAiU1BQREpRNzI3UlpJMEVLTFIyUU8iLAogICAgInNlY3JldCI6ICJiZmUycUZ0MTRaSEFYSldiNmZqcWxOaTRvekZjS3VFbk1GU3RVZUs0Igp9", "zone_endpoints": ["http://192.168.123.102:5500", "http://192.168.123.108:5500"]}]: dispatch 2026-03-10T09:58:52.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:51 vm08 ceph-mon[55477]: Saving service rgw.myrealm1.myzone1 spec with placement count:2 2026-03-10T09:58:52.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:51 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:52.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:51 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:52.111 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:51 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:52.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:51 vm02 ceph-mon[54811]: pgmap v18: 129 pgs: 129 active+clean; 455 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 5.3 KiB/s wr, 23 op/s 2026-03-10T09:58:52.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:51 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:52.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:51 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:52.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:51 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:52.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:51 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "rgw zone modify", 
"realm_name": "myrealm1", "zonegroup_name": "myzonegroup1", "zone_name": "myzone1", "realm_token": "ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbTEiLAogICAgInJlYWxtX2lkIjogIjFmNzZiN2EyLWYzOTAtNDUyMS1iYzY4LTQzZjQ3Yjk0MWUzZSIsCiAgICAiZW5kcG9pbnQiOiBudWxsLAogICAgImFjY2Vzc19rZXkiOiAiU1BQREpRNzI3UlpJMEVLTFIyUU8iLAogICAgInNlY3JldCI6ICJiZmUycUZ0MTRaSEFYSldiNmZqcWxOaTRvekZjS3VFbk1GU3RVZUs0Igp9", "zone_endpoints": ["http://192.168.123.102:5500", "http://192.168.123.108:5500"]}]: dispatch 2026-03-10T09:58:52.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:51 vm02 ceph-mon[54811]: from='mon.? -' entity='mon.' cmd=[{"prefix": "rgw zone modify", "realm_name": "myrealm1", "zonegroup_name": "myzonegroup1", "zone_name": "myzone1", "realm_token": "ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbTEiLAogICAgInJlYWxtX2lkIjogIjFmNzZiN2EyLWYzOTAtNDUyMS1iYzY4LTQzZjQ3Yjk0MWUzZSIsCiAgICAiZW5kcG9pbnQiOiBudWxsLAogICAgImFjY2Vzc19rZXkiOiAiU1BQREpRNzI3UlpJMEVLTFIyUU8iLAogICAgInNlY3JldCI6ICJiZmUycUZ0MTRaSEFYSldiNmZqcWxOaTRvekZjS3VFbk1GU3RVZUs0Igp9", "zone_endpoints": ["http://192.168.123.102:5500", "http://192.168.123.108:5500"]}]: dispatch 2026-03-10T09:58:52.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:51 vm02 ceph-mon[54811]: Saving service rgw.myrealm1.myzone1 spec with placement count:2 2026-03-10T09:58:52.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:51 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:52.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:51 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:52.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:51 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:51 vm01 ceph-mon[51930]: pgmap v18: 129 pgs: 129 active+clean; 455 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 5.3 KiB/s wr, 23 op/s 
2026-03-10T09:58:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:51 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:51 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:51 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:51 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "rgw zone modify", "realm_name": "myrealm1", "zonegroup_name": "myzonegroup1", "zone_name": "myzone1", "realm_token": "ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbTEiLAogICAgInJlYWxtX2lkIjogIjFmNzZiN2EyLWYzOTAtNDUyMS1iYzY4LTQzZjQ3Yjk0MWUzZSIsCiAgICAiZW5kcG9pbnQiOiBudWxsLAogICAgImFjY2Vzc19rZXkiOiAiU1BQREpRNzI3UlpJMEVLTFIyUU8iLAogICAgInNlY3JldCI6ICJiZmUycUZ0MTRaSEFYSldiNmZqcWxOaTRvekZjS3VFbk1GU3RVZUs0Igp9", "zone_endpoints": ["http://192.168.123.102:5500", "http://192.168.123.108:5500"]}]: dispatch 2026-03-10T09:58:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:51 vm01 ceph-mon[51930]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "rgw zone modify", "realm_name": "myrealm1", "zonegroup_name": "myzonegroup1", "zone_name": "myzone1", "realm_token": "ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbTEiLAogICAgInJlYWxtX2lkIjogIjFmNzZiN2EyLWYzOTAtNDUyMS1iYzY4LTQzZjQ3Yjk0MWUzZSIsCiAgICAiZW5kcG9pbnQiOiBudWxsLAogICAgImFjY2Vzc19rZXkiOiAiU1BQREpRNzI3UlpJMEVLTFIyUU8iLAogICAgInNlY3JldCI6ICJiZmUycUZ0MTRaSEFYSldiNmZqcWxOaTRvekZjS3VFbk1GU3RVZUs0Igp9", "zone_endpoints": ["http://192.168.123.102:5500", "http://192.168.123.108:5500"]}]: dispatch 2026-03-10T09:58:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:51 vm01 ceph-mon[51930]: Saving service rgw.myrealm1.myzone1 spec with placement count:2 2026-03-10T09:58:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:51 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:51 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:52.179 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:51 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:52 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:52 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:52 vm02 ceph-mon[54811]: pgmap v19: 129 pgs: 129 active+clean; 455 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 4.2 KiB/s wr, 18 op/s 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:52 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:52 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 
09:58:52 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:52 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:52 vm02 ceph-mon[54811]: Checking dashboard <-> RGW credentials 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:52 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:52 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:52 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:52 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:52 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:52 vm02 ceph-mon[54811]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:52 vm02 ceph-mon[54811]: Checking dashboard <-> RGW credentials 2026-03-10T09:58:53.159 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:52 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 
2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: pgmap v19: 129 pgs: 129 active+clean; 455 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 4.2 KiB/s wr, 18 op/s 2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: Checking dashboard <-> RGW credentials 2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: Checking dashboard <-> RGW credentials 2026-03-10T09:58:53.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:52 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 ceph-mon[51930]: pgmap v19: 129 pgs: 129 active+clean; 455 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 4.2 KiB/s wr, 18 op/s 2026-03-10T09:58:53.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:53.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:53.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 ceph-mon[51930]: Checking dashboard <-> RGW credentials 2026-03-10T09:58:53.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 
ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:53.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:58:53.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:58:53.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 ceph-mon[51930]: from='mgr.14418 192.168.123.101:0/2231843006' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:58:53.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 ceph-mon[51930]: Checking dashboard <-> RGW credentials 2026-03-10T09:58:53.430 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:52 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:54.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:54 vm08 ceph-mon[55477]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:54.408 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:54 vm02 ceph-mon[54811]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:54.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:54 vm01 ceph-mon[51930]: from='mgr.14418 ' entity='mgr.a' 2026-03-10T09:58:54.620 INFO:teuthology.orchestra.run.vm01.stderr:+ true 2026-03-10T09:58:54.620 INFO:teuthology.orchestra.run.vm01.stderr:++ jq -r '.[0].token' 2026-03-10T09:58:54.622 INFO:teuthology.orchestra.run.vm01.stderr:++ ceph rgw realm tokens 2026-03-10T09:58:55.178 INFO:teuthology.orchestra.run.vm01.stderr:+ 
TOKEN=ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbTEiLAogICAgInJlYWxtX2lkIjogIjFmNzZiN2EyLWYzOTAtNDUyMS1iYzY4LTQzZjQ3Yjk0MWUzZSIsCiAgICAiZW5kcG9pbnQiOiAiaHR0cDovLzE5Mi4xNjguMTIzLjEwMjo1NTAwIiwKICAgICJhY2Nlc3Nfa2V5IjogIlNQUERKUTcyN1JaSTBFS0xSMlFPIiwKICAgICJzZWNyZXQiOiAiYmZlMnFGdDE0WkhBWEpXYjZmanFsTmk0b3pGY0t1RW5NRlN0VWVLNCIKfQ== 2026-03-10T09:58:55.178 INFO:teuthology.orchestra.run.vm01.stderr:+ echo ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbTEiLAogICAgInJlYWxtX2lkIjogIjFmNzZiN2EyLWYzOTAtNDUyMS1iYzY4LTQzZjQ3Yjk0MWUzZSIsCiAgICAiZW5kcG9pbnQiOiAiaHR0cDovLzE5Mi4xNjguMTIzLjEwMjo1NTAwIiwKICAgICJhY2Nlc3Nfa2V5IjogIlNQUERKUTcyN1JaSTBFS0xSMlFPIiwKICAgICJzZWNyZXQiOiAiYmZlMnFGdDE0WkhBWEpXYjZmanFsTmk0b3pGY0t1RW5NRlN0VWVLNCIKfQ== 2026-03-10T09:58:55.178 INFO:teuthology.orchestra.run.vm01.stdout:ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbTEiLAogICAgInJlYWxtX2lkIjogIjFmNzZiN2EyLWYzOTAtNDUyMS1iYzY4LTQzZjQ3Yjk0MWUzZSIsCiAgICAiZW5kcG9pbnQiOiAiaHR0cDovLzE5Mi4xNjguMTIzLjEwMjo1NTAwIiwKICAgICJhY2Nlc3Nfa2V5IjogIlNQUERKUTcyN1JaSTBFS0xSMlFPIiwKICAgICJzZWNyZXQiOiAiYmZlMnFGdDE0WkhBWEpXYjZmanFsTmk0b3pGY0t1RW5NRlN0VWVLNCIKfQ== 2026-03-10T09:58:55.178 INFO:teuthology.orchestra.run.vm01.stderr:+ '[' ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbTEiLAogICAgInJlYWxtX2lkIjogIjFmNzZiN2EyLWYzOTAtNDUyMS1iYzY4LTQzZjQ3Yjk0MWUzZSIsCiAgICAiZW5kcG9pbnQiOiAiaHR0cDovLzE5Mi4xNjguMTIzLjEwMjo1NTAwIiwKICAgICJhY2Nlc3Nfa2V5IjogIlNQUERKUTcyN1JaSTBFS0xSMlFPIiwKICAgICJzZWNyZXQiOiAiYmZlMnFGdDE0WkhBWEpXYjZmanFsTmk0b3pGY0t1RW5NRlN0VWVLNCIKfQ== '!=' 'master zone has no endpoint' ']' 2026-03-10T09:58:55.178 INFO:teuthology.orchestra.run.vm01.stderr:+ break 2026-03-10T09:58:55.178 INFO:teuthology.orchestra.run.vm01.stderr:++ ceph rgw realm tokens 2026-03-10T09:58:55.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:55 vm08 ceph-mon[55477]: pgmap v20: 129 pgs: 129 active+clean; 455 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 3.2 KiB/s wr, 14 op/s 2026-03-10T09:58:55.408 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:55 vm02 
ceph-mon[54811]: pgmap v20: 129 pgs: 129 active+clean; 455 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 3.2 KiB/s wr, 14 op/s 2026-03-10T09:58:55.429 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:55 vm01 ceph-mon[51930]: pgmap v20: 129 pgs: 129 active+clean; 455 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 3.2 KiB/s wr, 14 op/s 2026-03-10T09:58:55.727 INFO:teuthology.orchestra.run.vm01.stderr:+ TOKENS='[ 2026-03-10T09:58:55.727 INFO:teuthology.orchestra.run.vm01.stderr: { 2026-03-10T09:58:55.727 INFO:teuthology.orchestra.run.vm01.stderr: "realm": "myrealm1", 2026-03-10T09:58:55.727 INFO:teuthology.orchestra.run.vm01.stderr: "token": "ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbTEiLAogICAgInJlYWxtX2lkIjogIjFmNzZiN2EyLWYzOTAtNDUyMS1iYzY4LTQzZjQ3Yjk0MWUzZSIsCiAgICAiZW5kcG9pbnQiOiAiaHR0cDovLzE5Mi4xNjguMTIzLjEwMjo1NTAwIiwKICAgICJhY2Nlc3Nfa2V5IjogIlNQUERKUTcyN1JaSTBFS0xSMlFPIiwKICAgICJzZWNyZXQiOiAiYmZlMnFGdDE0WkhBWEpXYjZmanFsTmk0b3pGY0t1RW5NRlN0VWVLNCIKfQ==" 2026-03-10T09:58:55.727 INFO:teuthology.orchestra.run.vm01.stderr: } 2026-03-10T09:58:55.727 INFO:teuthology.orchestra.run.vm01.stderr:]' 2026-03-10T09:58:55.727 INFO:teuthology.orchestra.run.vm01.stderr:+ jq --exit-status '.[0].realm == "myrealm1"' 2026-03-10T09:58:55.727 INFO:teuthology.orchestra.run.vm01.stderr:+ echo '[' '{' '"realm":' '"myrealm1",' '"token":' '"ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbTEiLAogICAgInJlYWxtX2lkIjogIjFmNzZiN2EyLWYzOTAtNDUyMS1iYzY4LTQzZjQ3Yjk0MWUzZSIsCiAgICAiZW5kcG9pbnQiOiAiaHR0cDovLzE5Mi4xNjguMTIzLjEwMjo1NTAwIiwKICAgICJhY2Nlc3Nfa2V5IjogIlNQUERKUTcyN1JaSTBFS0xSMlFPIiwKICAgICJzZWNyZXQiOiAiYmZlMnFGdDE0WkhBWEpXYjZmanFsTmk0b3pGY0t1RW5NRlN0VWVLNCIKfQ=="' '}' ']' 2026-03-10T09:58:55.729 INFO:teuthology.orchestra.run.vm01.stdout:true 2026-03-10T09:58:55.729 INFO:teuthology.orchestra.run.vm01.stderr:+ echo '[' '{' '"realm":' '"myrealm1",' '"token":' 
'"ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbTEiLAogICAgInJlYWxtX2lkIjogIjFmNzZiN2EyLWYzOTAtNDUyMS1iYzY4LTQzZjQ3Yjk0MWUzZSIsCiAgICAiZW5kcG9pbnQiOiAiaHR0cDovLzE5Mi4xNjguMTIzLjEwMjo1NTAwIiwKICAgICJhY2Nlc3Nfa2V5IjogIlNQUERKUTcyN1JaSTBFS0xSMlFPIiwKICAgICJzZWNyZXQiOiAiYmZlMnFGdDE0WkhBWEpXYjZmanFsTmk0b3pGY0t1RW5NRlN0VWVLNCIKfQ=="' '}' ']' 2026-03-10T09:58:55.729 INFO:teuthology.orchestra.run.vm01.stderr:+ jq --exit-status '.[0].token' 2026-03-10T09:58:55.731 INFO:teuthology.orchestra.run.vm01.stdout:"ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbTEiLAogICAgInJlYWxtX2lkIjogIjFmNzZiN2EyLWYzOTAtNDUyMS1iYzY4LTQzZjQ3Yjk0MWUzZSIsCiAgICAiZW5kcG9pbnQiOiAiaHR0cDovLzE5Mi4xNjguMTIzLjEwMjo1NTAwIiwKICAgICJhY2Nlc3Nfa2V5IjogIlNQUERKUTcyN1JaSTBFS0xSMlFPIiwKICAgICJzZWNyZXQiOiAiYmZlMnFGdDE0WkhBWEpXYjZmanFsTmk0b3pGY0t1RW5NRlN0VWVLNCIKfQ==" 2026-03-10T09:58:55.733 INFO:teuthology.orchestra.run.vm01.stderr:++ ceph rgw realm tokens 2026-03-10T09:58:55.733 INFO:teuthology.orchestra.run.vm01.stderr:++ jq -r '.[0].token' 2026-03-10T09:58:55.733 INFO:teuthology.orchestra.run.vm01.stderr:++ base64 --decode 2026-03-10T09:58:56.289 INFO:teuthology.orchestra.run.vm01.stderr:+ TOKEN_JSON='{ 2026-03-10T09:58:56.289 INFO:teuthology.orchestra.run.vm01.stderr: "realm_name": "myrealm1", 2026-03-10T09:58:56.289 INFO:teuthology.orchestra.run.vm01.stderr: "realm_id": "1f76b7a2-f390-4521-bc68-43f47b941e3e", 2026-03-10T09:58:56.289 INFO:teuthology.orchestra.run.vm01.stderr: "endpoint": "http://192.168.123.102:5500", 2026-03-10T09:58:56.289 INFO:teuthology.orchestra.run.vm01.stderr: "access_key": "SPPDJQ727RZI0EKLR2QO", 2026-03-10T09:58:56.289 INFO:teuthology.orchestra.run.vm01.stderr: "secret": "bfe2qFt14ZHAXJWb6fjqlNi4ozFcKuEnMFStUeK4" 2026-03-10T09:58:56.289 INFO:teuthology.orchestra.run.vm01.stderr:}' 2026-03-10T09:58:56.289 INFO:teuthology.orchestra.run.vm01.stderr:+ jq --exit-status '.realm_name == "myrealm1"' 2026-03-10T09:58:56.289 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:56 vm01 ceph-mon[51930]: 
from='client.14925 -' entity='client.admin' cmd=[{"prefix": "rgw realm tokens", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:58:56.289 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:56 vm01 ceph-mon[51930]: from='client.15021 -' entity='client.admin' cmd=[{"prefix": "rgw realm tokens", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:58:56.290 INFO:teuthology.orchestra.run.vm01.stderr:+ echo '{' '"realm_name":' '"myrealm1",' '"realm_id":' '"1f76b7a2-f390-4521-bc68-43f47b941e3e",' '"endpoint":' '"http://192.168.123.102:5500",' '"access_key":' '"SPPDJQ727RZI0EKLR2QO",' '"secret":' '"bfe2qFt14ZHAXJWb6fjqlNi4ozFcKuEnMFStUeK4"' '}' 2026-03-10T09:58:56.290 INFO:teuthology.orchestra.run.vm01.stdout:true 2026-03-10T09:58:56.290 INFO:teuthology.orchestra.run.vm01.stderr:+ echo '{' '"realm_name":' '"myrealm1",' '"realm_id":' '"1f76b7a2-f390-4521-bc68-43f47b941e3e",' '"endpoint":' '"http://192.168.123.102:5500",' '"access_key":' '"SPPDJQ727RZI0EKLR2QO",' '"secret":' '"bfe2qFt14ZHAXJWb6fjqlNi4ozFcKuEnMFStUeK4"' '}' 2026-03-10T09:58:56.290 INFO:teuthology.orchestra.run.vm01.stderr:+ jq --exit-status '.endpoint | test("http://.+:\\d+")' 2026-03-10T09:58:56.292 INFO:teuthology.orchestra.run.vm01.stdout:true 2026-03-10T09:58:56.293 INFO:teuthology.orchestra.run.vm01.stderr:+ echo '{' '"realm_name":' '"myrealm1",' '"realm_id":' '"1f76b7a2-f390-4521-bc68-43f47b941e3e",' '"endpoint":' '"http://192.168.123.102:5500",' '"access_key":' '"SPPDJQ727RZI0EKLR2QO",' '"secret":' '"bfe2qFt14ZHAXJWb6fjqlNi4ozFcKuEnMFStUeK4"' '}' 2026-03-10T09:58:56.293 INFO:teuthology.orchestra.run.vm01.stderr:+ jq --exit-status '.realm_id | test("^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")' 2026-03-10T09:58:56.299 INFO:teuthology.orchestra.run.vm01.stdout:true 2026-03-10T09:58:56.299 INFO:teuthology.orchestra.run.vm01.stderr:+ echo '{' '"realm_name":' '"myrealm1",' '"realm_id":' '"1f76b7a2-f390-4521-bc68-43f47b941e3e",' '"endpoint":' '"http://192.168.123.102:5500",' 
'"access_key":' '"SPPDJQ727RZI0EKLR2QO",' '"secret":' '"bfe2qFt14ZHAXJWb6fjqlNi4ozFcKuEnMFStUeK4"' '}' 2026-03-10T09:58:56.299 INFO:teuthology.orchestra.run.vm01.stderr:+ jq --exit-status .access_key 2026-03-10T09:58:56.301 INFO:teuthology.orchestra.run.vm01.stdout:"SPPDJQ727RZI0EKLR2QO" 2026-03-10T09:58:56.301 INFO:teuthology.orchestra.run.vm01.stderr:+ echo '{' '"realm_name":' '"myrealm1",' '"realm_id":' '"1f76b7a2-f390-4521-bc68-43f47b941e3e",' '"endpoint":' '"http://192.168.123.102:5500",' '"access_key":' '"SPPDJQ727RZI0EKLR2QO",' '"secret":' '"bfe2qFt14ZHAXJWb6fjqlNi4ozFcKuEnMFStUeK4"' '}' 2026-03-10T09:58:56.301 INFO:teuthology.orchestra.run.vm01.stderr:+ jq --exit-status .secret 2026-03-10T09:58:56.303 INFO:teuthology.orchestra.run.vm01.stdout:"bfe2qFt14ZHAXJWb6fjqlNi4ozFcKuEnMFStUeK4" 2026-03-10T09:58:56.360 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:56 vm08 ceph-mon[55477]: from='client.14925 -' entity='client.admin' cmd=[{"prefix": "rgw realm tokens", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:58:56.361 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:56 vm08 ceph-mon[55477]: from='client.15021 -' entity='client.admin' cmd=[{"prefix": "rgw realm tokens", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:58:56.408 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:56 vm02 ceph-mon[54811]: from='client.14925 -' entity='client.admin' cmd=[{"prefix": "rgw realm tokens", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:58:56.408 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:56 vm02 ceph-mon[54811]: from='client.15021 -' entity='client.admin' cmd=[{"prefix": "rgw realm tokens", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:58:56.444 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-10T09:58:56.446 INFO:tasks.cephadm:Teardown begin 2026-03-10T09:58:56.446 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T09:58:56.471 
DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T09:58:56.497 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T09:58:56.525 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-10T09:58:56.525 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d -- ceph mgr module disable cephadm 2026-03-10T09:58:56.697 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/mon.a/config 2026-03-10T09:58:56.715 INFO:teuthology.orchestra.run.vm01.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory 2026-03-10T09:58:56.735 DEBUG:teuthology.orchestra.run:got remote process result: 125 2026-03-10T09:58:56.735 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-10T09:58:56.735 DEBUG:teuthology.orchestra.run.vm01:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T09:58:56.749 DEBUG:teuthology.orchestra.run.vm02:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T09:58:56.763 DEBUG:teuthology.orchestra.run.vm08:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T09:58:56.779 INFO:tasks.cephadm:Stopping all daemons... 2026-03-10T09:58:56.779 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-10T09:58:56.780 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mon.a 2026-03-10T09:58:57.167 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:56 vm01 systemd[1]: Stopping Ceph mon.a for 4533cc1c-1c67-11f1-85c0-e37e5114407d... 
2026-03-10T09:58:57.167 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:56 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-a[51906]: 2026-03-10T09:58:56.897+0000 7fbc8005a640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T09:58:57.168 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:56 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-a[51906]: 2026-03-10T09:58:56.897+0000 7fbc8005a640 -1 mon.a@0(leader) e3 *** Got Signal Terminated *** 2026-03-10T09:58:57.168 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 09:58:57 vm01 podman[76043]: 2026-03-10 09:58:57.057296725 +0000 UTC m=+0.174523711 container died 4eaa42105425c83378dead67f28c30d54444b8cab9e462a3be67c7ad604626ca (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-a, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T09:58:57.239 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mon.a.service' 2026-03-10T09:58:57.271 DEBUG:teuthology.orchestra.run:got remote process result: 
None 2026-03-10T09:58:57.271 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-10T09:58:57.271 INFO:tasks.cephadm.mon.c:Stopping mon.b... 2026-03-10T09:58:57.271 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mon.b 2026-03-10T09:58:57.604 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:57 vm02 systemd[1]: Stopping Ceph mon.b for 4533cc1c-1c67-11f1-85c0-e37e5114407d... 2026-03-10T09:58:57.604 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:57 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-b[54785]: 2026-03-10T09:58:57.382+0000 7fe6477d7640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T09:58:57.604 INFO:journalctl@ceph.mon.b.vm02.stdout:Mar 10 09:58:57 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-b[54785]: 2026-03-10T09:58:57.382+0000 7fe6477d7640 -1 mon.b@2(peon) e3 *** Got Signal Terminated *** 2026-03-10T09:58:57.802 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mon.b.service' 2026-03-10T09:58:57.846 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T09:58:57.847 INFO:tasks.cephadm.mon.c:Stopped mon.b 2026-03-10T09:58:57.847 INFO:tasks.cephadm.mon.c:Stopping mon.c... 2026-03-10T09:58:57.847 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mon.c 2026-03-10T09:58:58.122 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:57 vm08 systemd[1]: Stopping Ceph mon.c for 4533cc1c-1c67-11f1-85c0-e37e5114407d... 
2026-03-10T09:58:58.122 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:57 vm08 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-c[55452]: 2026-03-10T09:58:57.951+0000 7efdc8a54640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T09:58:58.122 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:57 vm08 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-c[55452]: 2026-03-10T09:58:57.951+0000 7efdc8a54640 -1 mon.c@1(peon) e3 *** Got Signal Terminated *** 2026-03-10T09:58:58.122 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 09:58:58 vm08 podman[64780]: 2026-03-10 09:58:58.006931638 +0000 UTC m=+0.070785407 container died 94d054009b9423234954bab37992987140dd51df4c879312f737318931726c60 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mon-c, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T09:58:58.183 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mon.c.service' 2026-03-10T09:58:58.227 
DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T09:58:58.227 INFO:tasks.cephadm.mon.c:Stopped mon.c 2026-03-10T09:58:58.227 INFO:tasks.cephadm.mgr.a:Stopping mgr.a... 2026-03-10T09:58:58.227 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mgr.a 2026-03-10T09:58:58.508 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:58 vm01 systemd[1]: Stopping Ceph mgr.a for 4533cc1c-1c67-11f1-85c0-e37e5114407d... 2026-03-10T09:58:58.508 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 09:58:58 vm01 podman[76155]: 2026-03-10 09:58:58.389092934 +0000 UTC m=+0.063718830 container died 7a8fdbe7e3b9741c41450eacf3578fdb33a9b92da3bce14bc390dee10ccc89bc (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-a, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T09:58:58.582 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mgr.a.service' 2026-03-10T09:58:58.613 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T09:58:58.613 INFO:tasks.cephadm.mgr.a:Stopped mgr.a 2026-03-10T09:58:58.613 INFO:tasks.cephadm.mgr.b:Stopping mgr.b... 
2026-03-10T09:58:58.613 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mgr.b 2026-03-10T09:58:58.899 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:58 vm02 systemd[1]: Stopping Ceph mgr.b for 4533cc1c-1c67-11f1-85c0-e37e5114407d... 2026-03-10T09:58:58.899 INFO:journalctl@ceph.mgr.b.vm02.stdout:Mar 10 09:58:58 vm02 podman[64814]: 2026-03-10 09:58:58.784500022 +0000 UTC m=+0.072401905 container died 95c96277a3745f7a3f545fcef91bed7f148b417823e0e9864dff2123c5ba8026 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-mgr-b, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T09:58:58.973 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@mgr.b.service' 2026-03-10T09:58:59.014 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T09:58:59.014 INFO:tasks.cephadm.mgr.b:Stopped mgr.b 2026-03-10T09:58:59.014 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-10T09:58:59.014 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@osd.0 2026-03-10T09:58:59.429 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:58:59 vm01 systemd[1]: Stopping Ceph osd.0 for 4533cc1c-1c67-11f1-85c0-e37e5114407d... 
2026-03-10T09:58:59.429 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:58:59 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-0[64932]: 2026-03-10T09:58:59.116+0000 7fc564a92640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T09:58:59.429 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:58:59 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-0[64932]: 2026-03-10T09:58:59.116+0000 7fc564a92640 -1 osd.0 30 *** Got signal Terminated *** 2026-03-10T09:58:59.429 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:58:59 vm01 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-0[64932]: 2026-03-10T09:58:59.116+0000 7fc564a92640 -1 osd.0 30 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T09:59:04.418 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:59:04 vm01 podman[76267]: 2026-03-10 09:59:04.152425803 +0000 UTC m=+5.051840859 container died 85d77d0d24c527af8b02bb3e0c0c6ae25f22c18bf3fc2a0679ec71badcb0dcb8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T09:59:04.419 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:59:04 vm01 podman[76267]: 
2026-03-10 09:59:04.273848224 +0000 UTC m=+5.173263280 container remove 85d77d0d24c527af8b02bb3e0c0c6ae25f22c18bf3fc2a0679ec71badcb0dcb8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T09:59:04.419 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:59:04 vm01 bash[76267]: ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-0 2026-03-10T09:59:04.419 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:59:04 vm01 podman[76345]: 2026-03-10 09:59:04.418526182 +0000 UTC m=+0.015891710 container create 69862e95591a38b85cfb2ca47b89afb9b95f28ff9885854bdd2ca76a8e280aed (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-0-deactivate, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0) 2026-03-10T09:59:04.679 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:59:04 vm01 podman[76345]: 2026-03-10 09:59:04.454816923 +0000 UTC m=+0.052182451 container init 69862e95591a38b85cfb2ca47b89afb9b95f28ff9885854bdd2ca76a8e280aed (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-0-deactivate, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) 2026-03-10T09:59:04.679 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:59:04 vm01 podman[76345]: 2026-03-10 09:59:04.459952904 +0000 UTC m=+0.057318432 container start 69862e95591a38b85cfb2ca47b89afb9b95f28ff9885854bdd2ca76a8e280aed (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-0-deactivate, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T09:59:04.679 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:59:04 vm01 podman[76345]: 2026-03-10 09:59:04.462353446 +0000 UTC m=+0.059718994 container attach 69862e95591a38b85cfb2ca47b89afb9b95f28ff9885854bdd2ca76a8e280aed (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-0-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20260223) 2026-03-10T09:59:04.679 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:59:04 vm01 podman[76345]: 2026-03-10 09:59:04.412425816 +0000 UTC m=+0.009791355 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T09:59:04.679 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:59:04 vm01 conmon[76355]: conmon 69862e95591a38b85cfb : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-69862e95591a38b85cfb2ca47b89afb9b95f28ff9885854bdd2ca76a8e280aed.scope/memory.events 2026-03-10T09:59:04.679 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 09:59:04 vm01 podman[76345]: 2026-03-10 09:59:04.587734425 
+0000 UTC m=+0.185099953 container died 69862e95591a38b85cfb2ca47b89afb9b95f28ff9885854bdd2ca76a8e280aed (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-0-deactivate, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid) 2026-03-10T09:59:04.736 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@osd.0.service' 2026-03-10T09:59:04.771 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T09:59:04.771 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-10T09:59:04.771 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-10T09:59:04.771 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@osd.1 2026-03-10T09:59:05.159 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:59:04 vm02 systemd[1]: Stopping Ceph osd.1 for 4533cc1c-1c67-11f1-85c0-e37e5114407d... 
2026-03-10T09:59:05.159 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:59:04 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-1[59068]: 2026-03-10T09:59:04.889+0000 7f13c160c640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T09:59:05.159 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:59:04 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-1[59068]: 2026-03-10T09:59:04.890+0000 7f13c160c640 -1 osd.1 30 *** Got signal Terminated *** 2026-03-10T09:59:05.159 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:59:04 vm02 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-1[59068]: 2026-03-10T09:59:04.890+0000 7f13c160c640 -1 osd.1 30 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T09:59:10.204 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:59:09 vm02 podman[64927]: 2026-03-10 09:59:09.919595085 +0000 UTC m=+5.048392743 container died b72bda07ad7387a9d0908ff2a17bd232654c8654cf29c06b57ce55301badc3f8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-1, org.label-schema.build-date=20260223, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) 2026-03-10T09:59:10.204 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:59:10 vm02 podman[64927]: 
2026-03-10 09:59:10.045551223 +0000 UTC m=+5.174348881 container remove b72bda07ad7387a9d0908ff2a17bd232654c8654cf29c06b57ce55301badc3f8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T09:59:10.204 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:59:10 vm02 bash[64927]: ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-1 2026-03-10T09:59:10.504 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:59:10 vm02 podman[65005]: 2026-03-10 09:59:10.204850049 +0000 UTC m=+0.017911318 container create 1ab4ab57258288d99b9f46c3347b3e176665c425f18dbbf51d711fc35a162012 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-1-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, 
org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T09:59:10.505 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:59:10 vm02 podman[65005]: 2026-03-10 09:59:10.246083921 +0000 UTC m=+0.059145190 container init 1ab4ab57258288d99b9f46c3347b3e176665c425f18dbbf51d711fc35a162012 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-1-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T09:59:10.505 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:59:10 vm02 podman[65005]: 2026-03-10 09:59:10.25052518 +0000 UTC m=+0.063586449 container start 1ab4ab57258288d99b9f46c3347b3e176665c425f18dbbf51d711fc35a162012 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-1-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223) 2026-03-10T09:59:10.505 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:59:10 vm02 podman[65005]: 2026-03-10 09:59:10.251596365 +0000 UTC m=+0.064657634 container attach 1ab4ab57258288d99b9f46c3347b3e176665c425f18dbbf51d711fc35a162012 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-1-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS) 2026-03-10T09:59:10.505 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:59:10 vm02 podman[65005]: 2026-03-10 09:59:10.197629117 +0000 UTC m=+0.010690397 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T09:59:10.505 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 09:59:10 vm02 podman[65005]: 2026-03-10 09:59:10.387402317 +0000 UTC m=+0.200463586 container died 1ab4ab57258288d99b9f46c3347b3e176665c425f18dbbf51d711fc35a162012 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-1-deactivate, 
org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T09:59:10.521 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@osd.1.service' 2026-03-10T09:59:10.565 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T09:59:10.565 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-10T09:59:10.565 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-10T09:59:10.565 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@osd.2 2026-03-10T09:59:10.861 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:59:10 vm08 systemd[1]: Stopping Ceph osd.2 for 4533cc1c-1c67-11f1-85c0-e37e5114407d... 
2026-03-10T09:59:10.861 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:59:10 vm08 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-2[59550]: 2026-03-10T09:59:10.680+0000 7f4cff102640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T09:59:10.861 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:59:10 vm08 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-2[59550]: 2026-03-10T09:59:10.680+0000 7f4cff102640 -1 osd.2 30 *** Got signal Terminated *** 2026-03-10T09:59:10.861 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:59:10 vm08 ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-2[59550]: 2026-03-10T09:59:10.680+0000 7f4cff102640 -1 osd.2 30 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T09:59:15.987 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:59:15 vm08 podman[64894]: 2026-03-10 09:59:15.70316425 +0000 UTC m=+5.039856662 container died 40682bac199d581c1800ad95bfd5b53d3c9dfdce1d77410903461cbc44ab1cc5 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, CEPH_REF=squid) 2026-03-10T09:59:15.987 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:59:15 vm08 podman[64894]: 
2026-03-10 09:59:15.828498523 +0000 UTC m=+5.165190935 container remove 40682bac199d581c1800ad95bfd5b53d3c9dfdce1d77410903461cbc44ab1cc5 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20260223) 2026-03-10T09:59:15.987 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:59:15 vm08 bash[64894]: ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-2 2026-03-10T09:59:16.304 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:59:15 vm08 podman[64970]: 2026-03-10 09:59:15.98763877 +0000 UTC m=+0.018229191 container create 0383404b5aed0e0d16cadcced0b63df79be9ea9cbde0f125c22ab8c57c5c245f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-2-deactivate, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, 
org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T09:59:16.304 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:59:16 vm08 podman[64970]: 2026-03-10 09:59:16.03544817 +0000 UTC m=+0.066038600 container init 0383404b5aed0e0d16cadcced0b63df79be9ea9cbde0f125c22ab8c57c5c245f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-2-deactivate, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T09:59:16.304 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:59:16 vm08 podman[64970]: 2026-03-10 09:59:16.040811471 +0000 UTC m=+0.071401892 container start 0383404b5aed0e0d16cadcced0b63df79be9ea9cbde0f125c22ab8c57c5c245f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-2-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) 2026-03-10T09:59:16.304 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:59:16 vm08 podman[64970]: 2026-03-10 09:59:16.04506775 +0000 UTC m=+0.075658181 container attach 0383404b5aed0e0d16cadcced0b63df79be9ea9cbde0f125c22ab8c57c5c245f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-2-deactivate, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True) 2026-03-10T09:59:16.304 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:59:16 vm08 podman[64970]: 2026-03-10 09:59:15.98065303 +0000 UTC m=+0.011243462 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T09:59:16.304 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 09:59:16 vm08 podman[64970]: 2026-03-10 09:59:16.188840735 +0000 UTC m=+0.219431156 container died 0383404b5aed0e0d16cadcced0b63df79be9ea9cbde0f125c22ab8c57c5c245f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d-osd-2-deactivate, 
org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.schema-version=1.0) 2026-03-10T09:59:16.327 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-4533cc1c-1c67-11f1-85c0-e37e5114407d@osd.2.service' 2026-03-10T09:59:16.379 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T09:59:16.379 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-10T09:59:16.379 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d --force --keep-logs 2026-03-10T09:59:16.514 INFO:teuthology.orchestra.run.vm01.stdout:Deleting cluster with fsid: 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:59:17.530 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d --force --keep-logs 2026-03-10T09:59:17.664 INFO:teuthology.orchestra.run.vm02.stdout:Deleting cluster with fsid: 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:59:29.119 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d --force --keep-logs 2026-03-10T09:59:29.249 INFO:teuthology.orchestra.run.vm08.stdout:Deleting cluster with fsid: 4533cc1c-1c67-11f1-85c0-e37e5114407d 2026-03-10T09:59:40.491 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 
2026-03-10T09:59:40.519 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T09:59:40.545 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T09:59:40.570 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-10T09:59:40.570 DEBUG:teuthology.misc:Transferring archived files from vm01:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/990/remote/vm01/crash 2026-03-10T09:59:40.570 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/crash -- . 2026-03-10T09:59:40.596 INFO:teuthology.orchestra.run.vm01.stderr:tar: /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/crash: Cannot open: No such file or directory 2026-03-10T09:59:40.596 INFO:teuthology.orchestra.run.vm01.stderr:tar: Error is not recoverable: exiting now 2026-03-10T09:59:40.598 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/990/remote/vm02/crash 2026-03-10T09:59:40.598 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/crash -- . 2026-03-10T09:59:40.624 INFO:teuthology.orchestra.run.vm02.stderr:tar: /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/crash: Cannot open: No such file or directory 2026-03-10T09:59:40.624 INFO:teuthology.orchestra.run.vm02.stderr:tar: Error is not recoverable: exiting now 2026-03-10T09:59:40.625 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/990/remote/vm08/crash 2026-03-10T09:59:40.625 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/crash -- . 
2026-03-10T09:59:40.650 INFO:teuthology.orchestra.run.vm08.stderr:tar: /var/lib/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/crash: Cannot open: No such file or directory 2026-03-10T09:59:40.650 INFO:teuthology.orchestra.run.vm08.stderr:tar: Error is not recoverable: exiting now 2026-03-10T09:59:40.652 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-10T09:59:40.652 DEBUG:teuthology.orchestra.run.vm01:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v MON_DOWN | egrep -v 'mons down' | egrep -v 'mon down' | egrep -v 'out of quorum' | egrep -v CEPHADM_STRAY_DAEMON | head -n 1 2026-03-10T09:59:40.680 INFO:tasks.cephadm:Compressing logs... 2026-03-10T09:59:40.681 DEBUG:teuthology.orchestra.run.vm01:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T09:59:40.722 DEBUG:teuthology.orchestra.run.vm02:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T09:59:40.724 DEBUG:teuthology.orchestra.run.vm08:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T09:59:40.746 INFO:teuthology.orchestra.run.vm01.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T09:59:40.746 INFO:teuthology.orchestra.run.vm01.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T09:59:40.747 INFO:teuthology.orchestra.run.vm02.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T09:59:40.747 INFO:teuthology.orchestra.run.vm02.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T09:59:40.748 
INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-mon.a.log 2026-03-10T09:59:40.748 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-volume.log 2026-03-10T09:59:40.749 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/cephadm.log: 88.0% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T09:59:40.749 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.log 2026-03-10T09:59:40.749 INFO:teuthology.orchestra.run.vm01.stderr: 90.2% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T09:59:40.749 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.audit.log 2026-03-10T09:59:40.749 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-mon.b.log 2026-03-10T09:59:40.749 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.log: 84.2% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.log.gz 2026-03-10T09:59:40.750 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T09:59:40.750 INFO:teuthology.orchestra.run.vm08.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T09:59:40.750 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.cephadm.log 2026-03-10T09:59:40.750 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-mon.b.log: 94.7% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-volume.log.gz 
2026-03-10T09:59:40.751 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.audit.log
2026-03-10T09:59:40.751 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-volume.log
2026-03-10T09:59:40.751 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.cephadm.log: 80.6% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.cephadm.log.gz
2026-03-10T09:59:40.751 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.log
2026-03-10T09:59:40.752 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-mon.c.log
2026-03-10T09:59:40.752 INFO:teuthology.orchestra.run.vm08.stderr: 88.7% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T09:59:40.753 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.audit.log: 89.2% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.audit.log.gz
2026-03-10T09:59:40.753 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-mgr.b.log
2026-03-10T09:59:40.753 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.cephadm.log
2026-03-10T09:59:40.753 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.log: 83.3% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.log.gz
2026-03-10T09:59:40.753 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-osd.1.log
2026-03-10T09:59:40.754 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-mon.c.log: 94.7% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-volume.log.gz
2026-03-10T09:59:40.754 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.audit.log
2026-03-10T09:59:40.754 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.cephadm.log: 80.6% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.cephadm.log.gz
2026-03-10T09:59:40.755 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.log
2026-03-10T09:59:40.755 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-mgr.a.log
2026-03-10T09:59:40.755 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.audit.log: 89.2% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.audit.log.gz
2026-03-10T09:59:40.756 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-osd.2.log
2026-03-10T09:59:40.756 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.audit.log: 89.2% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.audit.log.gz
2026-03-10T09:59:40.756 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.log: 83.3% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.log.gz
2026-03-10T09:59:40.757 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-mgr.b.log: 89.9% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-mgr.b.log.gz
2026-03-10T09:59:40.757 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-client.rgw.myrealm1.myzone1.vm08.xyptdc.log
2026-03-10T09:59:40.758 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.cephadm.log
2026-03-10T09:59:40.758 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-client.rgw.myrealm1.myzone1.vm02.absswh.log
2026-03-10T09:59:40.762 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-mgr.a.log: gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-volume.log
2026-03-10T09:59:40.763 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-osd.2.log: /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-client.rgw.myrealm1.myzone1.vm08.xyptdc.log: 63.3% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-client.rgw.myrealm1.myzone1.vm08.xyptdc.log.gz
2026-03-10T09:59:40.764 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.cephadm.log: 82.8% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph.cephadm.log.gz
2026-03-10T09:59:40.766 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-osd.1.log: /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-client.rgw.myrealm1.myzone1.vm02.absswh.log: 63.4% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-client.rgw.myrealm1.myzone1.vm02.absswh.log.gz
2026-03-10T09:59:40.766 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-osd.0.log
2026-03-10T09:59:40.770 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-volume.log: 94.8% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-volume.log.gz
2026-03-10T09:59:40.772 INFO:teuthology.orchestra.run.vm02.stderr: 92.3% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-mon.b.log.gz
2026-03-10T09:59:40.788 INFO:teuthology.orchestra.run.vm08.stderr: 92.4% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-mon.c.log.gz
2026-03-10T09:59:40.792 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-osd.0.log: 90.0% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-mgr.a.log.gz
2026-03-10T09:59:40.856 INFO:teuthology.orchestra.run.vm01.stderr: 91.9% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-mon.a.log.gz
2026-03-10T09:59:40.945 INFO:teuthology.orchestra.run.vm08.stderr: 94.5% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-osd.2.log.gz
2026-03-10T09:59:40.947 INFO:teuthology.orchestra.run.vm08.stderr:
2026-03-10T09:59:40.948 INFO:teuthology.orchestra.run.vm08.stderr:real 0m0.209s
2026-03-10T09:59:40.948 INFO:teuthology.orchestra.run.vm08.stderr:user 0m0.218s
2026-03-10T09:59:40.948 INFO:teuthology.orchestra.run.vm08.stderr:sys 0m0.031s
2026-03-10T09:59:40.948 INFO:teuthology.orchestra.run.vm02.stderr: 94.5% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-osd.1.log.gz
2026-03-10T09:59:40.951 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-10T09:59:40.951 INFO:teuthology.orchestra.run.vm02.stderr:real 0m0.213s
2026-03-10T09:59:40.951 INFO:teuthology.orchestra.run.vm02.stderr:user 0m0.210s
2026-03-10T09:59:40.951 INFO:teuthology.orchestra.run.vm02.stderr:sys 0m0.031s
2026-03-10T09:59:40.971 INFO:teuthology.orchestra.run.vm01.stderr: 94.4% -- replaced with /var/log/ceph/4533cc1c-1c67-11f1-85c0-e37e5114407d/ceph-osd.0.log.gz
2026-03-10T09:59:40.972 INFO:teuthology.orchestra.run.vm01.stderr:
2026-03-10T09:59:40.972 INFO:teuthology.orchestra.run.vm01.stderr:real 0m0.237s
2026-03-10T09:59:40.973 INFO:teuthology.orchestra.run.vm01.stderr:user 0m0.312s
2026-03-10T09:59:40.973 INFO:teuthology.orchestra.run.vm01.stderr:sys 0m0.031s
2026-03-10T09:59:40.973 INFO:tasks.cephadm:Archiving logs...
2026-03-10T09:59:40.973 DEBUG:teuthology.misc:Transferring archived files from vm01:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/990/remote/vm01/log
2026-03-10T09:59:40.973 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T09:59:41.062 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/990/remote/vm02/log
2026-03-10T09:59:41.062 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T09:59:41.106 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/990/remote/vm08/log
2026-03-10T09:59:41.106 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T09:59:41.143 INFO:tasks.cephadm:Removing cluster...
2026-03-10T09:59:41.144 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d --force
2026-03-10T09:59:41.270 INFO:teuthology.orchestra.run.vm01.stdout:Deleting cluster with fsid: 4533cc1c-1c67-11f1-85c0-e37e5114407d
2026-03-10T09:59:41.482 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d --force
2026-03-10T09:59:41.610 INFO:teuthology.orchestra.run.vm02.stdout:Deleting cluster with fsid: 4533cc1c-1c67-11f1-85c0-e37e5114407d
2026-03-10T09:59:41.816 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 4533cc1c-1c67-11f1-85c0-e37e5114407d --force
2026-03-10T09:59:41.939 INFO:teuthology.orchestra.run.vm08.stdout:Deleting cluster with fsid: 4533cc1c-1c67-11f1-85c0-e37e5114407d
2026-03-10T09:59:42.136 INFO:tasks.cephadm:Removing cephadm ...
2026-03-10T09:59:42.136 DEBUG:teuthology.orchestra.run.vm01:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T09:59:42.154 DEBUG:teuthology.orchestra.run.vm02:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T09:59:42.169 DEBUG:teuthology.orchestra.run.vm08:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T09:59:42.184 INFO:tasks.cephadm:Teardown complete
2026-03-10T09:59:42.184 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-10T09:59:42.186 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-03-10T09:59:42.186 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T09:59:42.196 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T09:59:42.211 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T09:59:42.256 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-10T09:59:42.256 DEBUG:teuthology.orchestra.run.vm01:>
2026-03-10T09:59:42.256 DEBUG:teuthology.orchestra.run.vm01:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-10T09:59:42.256 DEBUG:teuthology.orchestra.run.vm01:> sudo yum -y remove $d || true
2026-03-10T09:59:42.256 DEBUG:teuthology.orchestra.run.vm01:> done
2026-03-10T09:59:42.264 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-10T09:59:42.264 DEBUG:teuthology.orchestra.run.vm02:>
2026-03-10T09:59:42.264 DEBUG:teuthology.orchestra.run.vm02:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-10T09:59:42.264 DEBUG:teuthology.orchestra.run.vm02:> sudo yum -y remove $d || true
2026-03-10T09:59:42.264 DEBUG:teuthology.orchestra.run.vm02:> done
2026-03-10T09:59:42.269 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-10T09:59:42.270 DEBUG:teuthology.orchestra.run.vm08:>
2026-03-10T09:59:42.270 DEBUG:teuthology.orchestra.run.vm08:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-10T09:59:42.270 DEBUG:teuthology.orchestra.run.vm08:> sudo yum -y remove $d || true
2026-03-10T09:59:42.270 DEBUG:teuthology.orchestra.run.vm08:> done
2026-03-10T09:59:42.445 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:59:42.445 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:42.445 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T09:59:42.445 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:42.446 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T09:59:42.446 INFO:teuthology.orchestra.run.vm01.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M
2026-03-10T09:59:42.446 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies:
2026-03-10T09:59:42.446 INFO:teuthology.orchestra.run.vm01.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k
2026-03-10T09:59:42.446 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:42.446 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T09:59:42.446 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:42.446 INFO:teuthology.orchestra.run.vm01.stdout:Remove 2 Packages
2026-03-10T09:59:42.446 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:42.446 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 39 M
2026-03-10T09:59:42.446 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T09:59:42.448 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T09:59:42.448 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T09:59:42.462 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T09:59:42.462 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T09:59:42.464 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:59:42.464 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:42.464 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size
2026-03-10T09:59:42.464 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:42.464 INFO:teuthology.orchestra.run.vm02.stdout:Removing:
2026-03-10T09:59:42.464 INFO:teuthology.orchestra.run.vm02.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M
2026-03-10T09:59:42.465 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies:
2026-03-10T09:59:42.465 INFO:teuthology.orchestra.run.vm02.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k
2026-03-10T09:59:42.465 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:42.465 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T09:59:42.465 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:42.465 INFO:teuthology.orchestra.run.vm02.stdout:Remove 2 Packages
2026-03-10T09:59:42.465 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:42.465 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 39 M
2026-03-10T09:59:42.465 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check
2026-03-10T09:59:42.467 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded.
2026-03-10T09:59:42.467 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T09:59:42.474 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:59:42.474 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:42.474 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T09:59:42.474 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:42.474 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T09:59:42.474 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M
2026-03-10T09:59:42.474 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T09:59:42.474 INFO:teuthology.orchestra.run.vm08.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k
2026-03-10T09:59:42.474 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:42.474 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T09:59:42.474 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:42.474 INFO:teuthology.orchestra.run.vm08.stdout:Remove 2 Packages
2026-03-10T09:59:42.474 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:42.474 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 39 M
2026-03-10T09:59:42.475 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T09:59:42.477 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T09:59:42.477 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T09:59:42.481 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T09:59:42.481 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction
2026-03-10T09:59:42.489 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T09:59:42.489 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T09:59:42.493 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T09:59:42.513 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1
2026-03-10T09:59:42.517 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:59:42.517 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:42.517 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T09:59:42.517 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-10T09:59:42.517 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-10T09:59:42.517 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:42.519 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T09:59:42.520 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:59:42.530 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:59:42.535 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:59:42.536 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:42.536 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T09:59:42.536 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-10T09:59:42.536 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-10T09:59:42.536 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:42.539 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:59:42.544 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:59:42.544 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:42.544 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T09:59:42.544 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-10T09:59:42.544 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-10T09:59:42.544 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:42.544 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T09:59:42.547 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:59:42.548 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:59:42.556 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:59:42.562 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T09:59:42.570 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T09:59:42.623 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T09:59:42.623 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:59:42.636 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T09:59:42.637 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:59:42.645 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T09:59:42.645 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:59:42.684 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T09:59:42.684 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:42.684 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T09:59:42.684 INFO:teuthology.orchestra.run.vm01.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-10T09:59:42.684 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:42.684 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:59:42.699 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T09:59:42.700 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:42.700 INFO:teuthology.orchestra.run.vm02.stdout:Removed:
2026-03-10T09:59:42.700 INFO:teuthology.orchestra.run.vm02.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-10T09:59:42.700 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:42.700 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:42.702 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T09:59:42.702 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:42.702 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T09:59:42.702 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-10T09:59:42.702 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:42.702 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:59:42.906 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies:
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout:Remove 4 Packages
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 212 M
2026-03-10T09:59:42.907 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T09:59:42.910 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T09:59:42.910 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T09:59:42.922 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout:Remove 4 Packages
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 212 M
2026-03-10T09:59:42.923 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T09:59:42.926 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T09:59:42.926 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T09:59:42.934 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T09:59:42.934 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T09:59:42.950 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T09:59:42.950 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T09:59:42.953 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:59:42.953 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:42.953 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size
2026-03-10T09:59:42.953 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:42.954 INFO:teuthology.orchestra.run.vm02.stdout:Removing:
2026-03-10T09:59:42.954 INFO:teuthology.orchestra.run.vm02.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-10T09:59:42.954 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies:
2026-03-10T09:59:42.954 INFO:teuthology.orchestra.run.vm02.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-10T09:59:42.954 INFO:teuthology.orchestra.run.vm02.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-10T09:59:42.954 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-10T09:59:42.954 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:42.954 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T09:59:42.954 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:42.954 INFO:teuthology.orchestra.run.vm02.stdout:Remove 4 Packages
2026-03-10T09:59:42.954 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:42.954 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 212 M
2026-03-10T09:59:42.954 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check
2026-03-10T09:59:42.957 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded.
2026-03-10T09:59:42.957 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T09:59:42.982 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T09:59:42.982 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction
2026-03-10T09:59:42.997 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T09:59:43.003 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T09:59:43.005 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-10T09:59:43.008 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-10T09:59:43.013 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T09:59:43.019 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T09:59:43.021 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-10T09:59:43.023 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T09:59:43.023 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-10T09:59:43.038 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T09:59:43.043 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1
2026-03-10T09:59:43.049 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T09:59:43.052 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-10T09:59:43.055 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-10T09:59:43.071 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T09:59:43.085 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T09:59:43.085 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T09:59:43.085 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T09:59:43.085 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T09:59:43.099 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T09:59:43.099 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T09:59:43.099 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T09:59:43.099 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T09:59:43.137 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T09:59:43.137 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:43.137 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T09:59:43.137 INFO:teuthology.orchestra.run.vm01.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T09:59:43.137 INFO:teuthology.orchestra.run.vm01.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T09:59:43.137 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:43.137 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:59:43.139 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T09:59:43.140 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T09:59:43.140 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T09:59:43.140 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T09:59:43.145 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T09:59:43.145 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:43.145 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T09:59:43.145 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T09:59:43.145 INFO:teuthology.orchestra.run.vm08.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T09:59:43.145 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:43.145 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:59:43.200 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T09:59:43.200 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:43.200 INFO:teuthology.orchestra.run.vm02.stdout:Removed:
2026-03-10T09:59:43.200 INFO:teuthology.orchestra.run.vm02.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T09:59:43.200 INFO:teuthology.orchestra.run.vm02.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T09:59:43.200 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:43.200 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:43.359 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:59:43.360 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:43.360 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T09:59:43.360 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies:
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout:Remove 8 Packages
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 28 M
2026-03-10T09:59:43.361 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T09:59:43.363 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T09:59:43.363 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T09:59:43.370 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout:Remove 8 Packages
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 28 M
2026-03-10T09:59:43.371 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T09:59:43.374 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T09:59:43.374 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T09:59:43.387 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T09:59:43.387 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T09:59:43.399 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T09:59:43.399 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T09:59:43.428 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T09:59:43.433 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T09:59:43.434 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout:Removing:
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies:
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T09:59:43.435 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:43.436 INFO:teuthology.orchestra.run.vm02.stdout:Remove 8 Packages
2026-03-10T09:59:43.436 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:43.436 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 28 M
2026-03-10T09:59:43.436 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check
2026-03-10T09:59:43.436 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-10T09:59:43.438 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-10T09:59:43.439 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded.
2026-03-10T09:59:43.439 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T09:59:43.441 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-10T09:59:43.442 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T09:59:43.444 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-10T09:59:43.446 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-10T09:59:43.447 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T09:59:43.452 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-10T09:59:43.454 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-10T09:59:43.457 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-10T09:59:43.460 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-10T09:59:43.462 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-10T09:59:43.463 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T09:59:43.464 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction
2026-03-10T09:59:43.469 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T09:59:43.469 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:43.469 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T09:59:43.469 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-10T09:59:43.469 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-10T09:59:43.469 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:43.470 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T09:59:43.478 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T09:59:43.485 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T09:59:43.485 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:43.485 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T09:59:43.485 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-10T09:59:43.485 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-10T09:59:43.485 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:43.486 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T09:59:43.504 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T09:59:43.504 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:43.504 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T09:59:43.504 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-10T09:59:43.504 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-10T09:59:43.504 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:43.506 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T09:59:43.509 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1
2026-03-10T09:59:43.522 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T09:59:43.526 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T09:59:43.529 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-10T09:59:43.532 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-10T09:59:43.535 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-10T09:59:43.537 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-10T09:59:43.539 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-10T09:59:43.545 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T09:59:43.545 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:43.545 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T09:59:43.545 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-10T09:59:43.545 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-10T09:59:43.545 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:43.547 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T09:59:43.561 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T09:59:43.561 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:43.561 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T09:59:43.561 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-10T09:59:43.561 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-10T09:59:43.561 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:43.562 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T09:59:43.570 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T09:59:43.595 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T09:59:43.596 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:43.596 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T09:59:43.596 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-10T09:59:43.596 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-10T09:59:43.596 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:43.597 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T09:59:43.606 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T09:59:43.606 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T09:59:43.606 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-10T09:59:43.606 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-10T09:59:43.606 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-10T09:59:43.606 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-10T09:59:43.606 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-10T09:59:43.606 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-10T09:59:43.632 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T09:59:43.633 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T09:59:43.633 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-10T09:59:43.633 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-10T09:59:43.633 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-10T09:59:43.633 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-10T09:59:43.633 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-10T09:59:43.633 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-10T09:59:43.653 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-10T09:59:43.653 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:43.653 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T09:59:43.653 INFO:teuthology.orchestra.run.vm01.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:43.653 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:43.653 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:43.653 INFO:teuthology.orchestra.run.vm01.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T09:59:43.653 INFO:teuthology.orchestra.run.vm01.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T09:59:43.653 INFO:teuthology.orchestra.run.vm01.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T09:59:43.653 INFO:teuthology.orchestra.run.vm01.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T09:59:43.653 INFO:teuthology.orchestra.run.vm01.stdout: zip-3.0-35.el9.x86_64
2026-03-10T09:59:43.653 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:43.653 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:59:43.677 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-10T09:59:43.677 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:43.677 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T09:59:43.677 INFO:teuthology.orchestra.run.vm08.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:43.677 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:43.677 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:43.677 INFO:teuthology.orchestra.run.vm08.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T09:59:43.677 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T09:59:43.677 INFO:teuthology.orchestra.run.vm08.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T09:59:43.677 INFO:teuthology.orchestra.run.vm08.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T09:59:43.677 INFO:teuthology.orchestra.run.vm08.stdout: zip-3.0-35.el9.x86_64
2026-03-10T09:59:43.677 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:43.677 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:59:43.692 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T09:59:43.692 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T09:59:43.692 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-10T09:59:43.692 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-10T09:59:43.692 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-10T09:59:43.692 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-10T09:59:43.692 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-10T09:59:43.692 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-10T09:59:43.741 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-10T09:59:43.741 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:43.741 INFO:teuthology.orchestra.run.vm02.stdout:Removed:
2026-03-10T09:59:43.741 INFO:teuthology.orchestra.run.vm02.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:43.741 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:43.741 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:43.741 INFO:teuthology.orchestra.run.vm02.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T09:59:43.741 INFO:teuthology.orchestra.run.vm02.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T09:59:43.741 INFO:teuthology.orchestra.run.vm02.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T09:59:43.741 INFO:teuthology.orchestra.run.vm02.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T09:59:43.741 INFO:teuthology.orchestra.run.vm02.stdout: zip-3.0-35.el9.x86_64
2026-03-10T09:59:43.741 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:43.741 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:43.882 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:59:43.887 INFO:teuthology.orchestra.run.vm01.stdout:===========================================================================================
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout:===========================================================================================
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout:Removing dependent packages:
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies:
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-10T09:59:43.888 INFO:teuthology.orchestra.run.vm01.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k 2026-03-10T09:59:43.889 
INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 
2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout:=========================================================================================== 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout:Remove 102 Packages 2026-03-10T09:59:43.889 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:43.890 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 613 M 2026-03-10T09:59:43.890 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check 2026-03-10T09:59:43.899 
INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout:=========================================================================================== 2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size 2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout:=========================================================================================== 2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout:Removing: 2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M 2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages: 2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k 2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M 2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k 2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M 2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M 2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k 2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M 2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M 2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-10T09:59:43.905 
INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M
2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k
2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M
2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k
2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k
2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-10T09:59:43.905 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M
2026-03-10T09:59:43.906 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-10T09:59:43.906
INFO:teuthology.orchestra.run.vm08.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 
2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout:=========================================================================================== 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout:Remove 102 Packages 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 613 M 2026-03-10T09:59:43.907 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check 2026-03-10T09:59:43.916 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded. 2026-03-10T09:59:43.916 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test 2026-03-10T09:59:43.933 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded. 
2026-03-10T09:59:43.933 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test 2026-03-10T09:59:43.956 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T09:59:43.961 INFO:teuthology.orchestra.run.vm02.stdout:=========================================================================================== 2026-03-10T09:59:43.961 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size 2026-03-10T09:59:43.961 INFO:teuthology.orchestra.run.vm02.stdout:=========================================================================================== 2026-03-10T09:59:43.961 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T09:59:43.961 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M 2026-03-10T09:59:43.961 INFO:teuthology.orchestra.run.vm02.stdout:Removing dependent packages: 2026-03-10T09:59:43.961 INFO:teuthology.orchestra.run.vm02.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k 2026-03-10T09:59:43.961 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M 2026-03-10T09:59:43.961 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k 2026-03-10T09:59:43.961 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M 2026-03-10T09:59:43.961 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M 2026-03-10T09:59:43.961 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k 2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M 2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M 2026-03-10T09:59:43.962 
INFO:teuthology.orchestra.run.vm02.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies:
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-10T09:59:43.962 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k
2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-10T09:59:43.963
INFO:teuthology.orchestra.run.vm02.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 
2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout:=========================================================================================== 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout:Remove 102 Packages 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 613 M 2026-03-10T09:59:43.963 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T09:59:43.988 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 
2026-03-10T09:59:43.988 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T09:59:44.021 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T09:59:44.021 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T09:59:44.037 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T09:59:44.038 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T09:59:44.094 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T09:59:44.095 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction
2026-03-10T09:59:44.166 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T09:59:44.166 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102
2026-03-10T09:59:44.173 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102
2026-03-10T09:59:44.181 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T09:59:44.181 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102
2026-03-10T09:59:44.190 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102
2026-03-10T09:59:44.196 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T09:59:44.196 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:44.196 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T09:59:44.196 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-10T09:59:44.196 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-10T09:59:44.196 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:44.196 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T09:59:44.212 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T09:59:44.212 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T09:59:44.213 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:44.213 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T09:59:44.213 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-10T09:59:44.213 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-10T09:59:44.213 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:44.213 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T09:59:44.228 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T09:59:44.238 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102
2026-03-10T09:59:44.238 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102
2026-03-10T09:59:44.241 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1
2026-03-10T09:59:44.241 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102
2026-03-10T09:59:44.250 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102
2026-03-10T09:59:44.252 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102
2026-03-10T09:59:44.252 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102
2026-03-10T09:59:44.270 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T09:59:44.270 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:44.270 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T09:59:44.271 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-10T09:59:44.271 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-10T09:59:44.271 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:44.271 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T09:59:44.284 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T09:59:44.298 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102
2026-03-10T09:59:44.307 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102
2026-03-10T09:59:44.308 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102
2026-03-10T09:59:44.308 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102
2026-03-10T09:59:44.311 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102
2026-03-10T09:59:44.312 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102
2026-03-10T09:59:44.312 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102
2026-03-10T09:59:44.320 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102
2026-03-10T09:59:44.324 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102
2026-03-10T09:59:44.324 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102
2026-03-10T09:59:44.324 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102
2026-03-10T09:59:44.331 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102
2026-03-10T09:59:44.335 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102
2026-03-10T09:59:44.339 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102
2026-03-10T09:59:44.344 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102
2026-03-10T09:59:44.348 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102
2026-03-10T09:59:44.348 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102
2026-03-10T09:59:44.354 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102
2026-03-10T09:59:44.362 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102
2026-03-10T09:59:44.366 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102
2026-03-10T09:59:44.370 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T09:59:44.370 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:44.370 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T09:59:44.370 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-10T09:59:44.370 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-10T09:59:44.370 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:44.371 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102
2026-03-10T09:59:44.378 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T09:59:44.381 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102
2026-03-10T09:59:44.387 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102
2026-03-10T09:59:44.387 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102
2026-03-10T09:59:44.390 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T09:59:44.391 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T09:59:44.391 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:44.391 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T09:59:44.391 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-10T09:59:44.391 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-10T09:59:44.391 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:44.399 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T09:59:44.400 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102
2026-03-10T09:59:44.408 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102
2026-03-10T09:59:44.408 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T09:59:44.408 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:44.408 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T09:59:44.408 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:44.410 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T09:59:44.412 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102
2026-03-10T09:59:44.417 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T09:59:44.421 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102
2026-03-10T09:59:44.426 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102
2026-03-10T09:59:44.428 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T09:59:44.429 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T09:59:44.429 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:44.429 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T09:59:44.429 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:44.431 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102
2026-03-10T09:59:44.436 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102
2026-03-10T09:59:44.437 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T09:59:44.441 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102
2026-03-10T09:59:44.448 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T09:59:44.448 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:44.448 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T09:59:44.448 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-10T09:59:44.448 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-10T09:59:44.448 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:44.448 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T09:59:44.450 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102
2026-03-10T09:59:44.450 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102
2026-03-10T09:59:44.454 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T09:59:44.455 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102
2026-03-10T09:59:44.459 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102
2026-03-10T09:59:44.463 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102
2026-03-10T09:59:44.464 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T09:59:44.468 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102
2026-03-10T09:59:44.469 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102
2026-03-10T09:59:44.479 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102
2026-03-10T09:59:44.481 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102
2026-03-10T09:59:44.482 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T09:59:44.482 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:44.482 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T09:59:44.482 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:44.485 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102
2026-03-10T09:59:44.487 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102
2026-03-10T09:59:44.491 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T09:59:44.497 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102
2026-03-10T09:59:44.502 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T09:59:44.504 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102
2026-03-10T09:59:44.505 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102
2026-03-10T09:59:44.510 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102
2026-03-10T09:59:44.514 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102
2026-03-10T09:59:44.515 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102
2026-03-10T09:59:44.521 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102
2026-03-10T09:59:44.523 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102
2026-03-10T09:59:44.524 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102
2026-03-10T09:59:44.533 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102
2026-03-10T09:59:44.534 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102
2026-03-10T09:59:44.536 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102
2026-03-10T09:59:44.542 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102
2026-03-10T09:59:44.544 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102
2026-03-10T09:59:44.544 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102
2026-03-10T09:59:44.544 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T09:59:44.545 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102
2026-03-10T09:59:44.552 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T09:59:44.554 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102
2026-03-10T09:59:44.554 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102
2026-03-10T09:59:44.561 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102
2026-03-10T09:59:44.565 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102
2026-03-10T09:59:44.565 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T09:59:44.572 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T09:59:44.591 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102
2026-03-10T09:59:44.597 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102
2026-03-10T09:59:44.600 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102
2026-03-10T09:59:44.609 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102
2026-03-10T09:59:44.620 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102
2026-03-10T09:59:44.620 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T09:59:44.628 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T09:59:44.645 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102
2026-03-10T09:59:44.661 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102
2026-03-10T09:59:44.664 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102
2026-03-10T09:59:44.675 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T09:59:44.676 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-10T09:59:44.676 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:44.676 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T09:59:44.680 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102
2026-03-10T09:59:44.693 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T09:59:44.693 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-10T09:59:44.693 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:44.694 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T09:59:44.705 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T09:59:44.720 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T09:59:44.721 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102
2026-03-10T09:59:44.721 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102
2026-03-10T09:59:44.726 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102
2026-03-10T09:59:44.728 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102
2026-03-10T09:59:44.732 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102
2026-03-10T09:59:44.736 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102
2026-03-10T09:59:44.736 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102
2026-03-10T09:59:44.743 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102
2026-03-10T09:59:44.746 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102
2026-03-10T09:59:44.748 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102
2026-03-10T09:59:44.750 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T09:59:44.750 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-10T09:59:44.751 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:44.752 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T09:59:44.752 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T09:59:44.752 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:44.752 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T09:59:44.752 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-10T09:59:44.752 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-10T09:59:44.752 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:44.753 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T09:59:44.765 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T09:59:44.769 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102
2026-03-10T09:59:44.772 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102
2026-03-10T09:59:44.772 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T09:59:44.772 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:44.772 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T09:59:44.772 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-10T09:59:44.772 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-10T09:59:44.772 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:44.774 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T09:59:44.774 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102
2026-03-10T09:59:44.778 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102
2026-03-10T09:59:44.779 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T09:59:44.781 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102
2026-03-10T09:59:44.786 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T09:59:44.788 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102
2026-03-10T09:59:44.790 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102
2026-03-10T09:59:44.793 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102
2026-03-10T09:59:44.793 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102
2026-03-10T09:59:44.795 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102
2026-03-10T09:59:44.798 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102
2026-03-10T09:59:44.798 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102
2026-03-10T09:59:44.832 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102
2026-03-10T09:59:44.832 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102
2026-03-10T09:59:44.840 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102
2026-03-10T09:59:44.851 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102
2026-03-10T09:59:44.852 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102
2026-03-10T09:59:44.854 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102
2026-03-10T09:59:44.855 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102
2026-03-10T09:59:44.858 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102
2026-03-10T09:59:44.858 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102
2026-03-10T09:59:44.864 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102
2026-03-10T09:59:44.868 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102
2026-03-10T09:59:44.871 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102
2026-03-10T09:59:44.874 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102
2026-03-10T09:59:44.875 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T09:59:44.875 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:44.875 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T09:59:44.875 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-10T09:59:44.875 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-10T09:59:44.875 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:44.876 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T09:59:44.889 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T09:59:44.893 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102
2026-03-10T09:59:44.895 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102
2026-03-10T09:59:44.897 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T09:59:44.897 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:44.897 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T09:59:44.897 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:44.897 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T09:59:44.898 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102
2026-03-10T09:59:44.901 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102
2026-03-10T09:59:44.905 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102
2026-03-10T09:59:44.906 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T09:59:44.906 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102
2026-03-10T09:59:44.908 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102
2026-03-10T09:59:44.909 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102
2026-03-10T09:59:44.910 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102
2026-03-10T09:59:44.913 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102
2026-03-10T09:59:44.914 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102
2026-03-10T09:59:44.916 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102
2026-03-10T09:59:44.918 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102
2026-03-10T09:59:44.918 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102
2026-03-10T09:59:44.920 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102
2026-03-10T09:59:44.921 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102
2026-03-10T09:59:44.923 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102
2026-03-10T09:59:44.926 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102
2026-03-10T09:59:44.926 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102
2026-03-10T09:59:44.929 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102
2026-03-10T09:59:44.932 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102
2026-03-10T09:59:44.934 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102
2026-03-10T09:59:44.935 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102
2026-03-10T09:59:44.938 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102
2026-03-10T09:59:44.940 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102
2026-03-10T09:59:44.943 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102
2026-03-10T09:59:44.946 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102
2026-03-10T09:59:44.951 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102
2026-03-10T09:59:44.955 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102
2026-03-10T09:59:44.958 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T09:59:44.958 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:59:44.958 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-10T09:59:44.958 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:59:44.959 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-10T09:59:44.960 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102 2026-03-10T09:59:44.965 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102 2026-03-10T09:59:44.965 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102 2026-03-10T09:59:44.969 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-10T09:59:44.970 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102 2026-03-10T09:59:44.974 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102 2026-03-10T09:59:44.976 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102 2026-03-10T09:59:44.977 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102 2026-03-10T09:59:44.977 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102 2026-03-10T09:59:44.980 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102 2026-03-10T09:59:44.980 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102 2026-03-10T09:59:44.980 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102 2026-03-10T09:59:44.982 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102 2026-03-10T09:59:44.984 
INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102 2026-03-10T09:59:44.985 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102 2026-03-10T09:59:44.985 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102 2026-03-10T09:59:44.986 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102 2026-03-10T09:59:44.987 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102 2026-03-10T09:59:44.988 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102 2026-03-10T09:59:44.991 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102 2026-03-10T09:59:44.991 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102 2026-03-10T09:59:44.993 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102 2026-03-10T09:59:44.994 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102 2026-03-10T09:59:44.997 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102 2026-03-10T09:59:44.999 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102 2026-03-10T09:59:45.001 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102 2026-03-10T09:59:45.004 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102 2026-03-10T09:59:45.006 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102 2026-03-10T09:59:45.009 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102 2026-03-10T09:59:45.009 INFO:teuthology.orchestra.run.vm01.stdout: 
Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102 2026-03-10T09:59:45.011 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102 2026-03-10T09:59:45.014 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102 2026-03-10T09:59:45.016 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102 2026-03-10T09:59:45.016 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-10T09:59:45.016 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T09:59:45.016 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-10T09:59:45.016 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:59:45.017 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-10T09:59:45.017 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102 2026-03-10T09:59:45.020 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102 2026-03-10T09:59:45.020 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102 2026-03-10T09:59:45.022 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102 2026-03-10T09:59:45.025 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-10T09:59:45.025 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102 2026-03-10T09:59:45.027 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102 2026-03-10T09:59:45.027 
INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102 2026-03-10T09:59:45.029 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102 2026-03-10T09:59:45.030 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102 2026-03-10T09:59:45.031 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102 2026-03-10T09:59:45.032 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102 2026-03-10T09:59:45.034 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102 2026-03-10T09:59:45.036 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102 2026-03-10T09:59:45.036 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102 2026-03-10T09:59:45.039 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102 2026-03-10T09:59:45.040 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102 2026-03-10T09:59:45.042 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102 2026-03-10T09:59:45.043 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102 2026-03-10T09:59:45.045 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102 2026-03-10T09:59:45.047 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102 2026-03-10T09:59:45.052 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T09:59:45.052 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 
2026-03-10T09:59:45.052 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:45.052 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102 2026-03-10T09:59:45.053 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102 2026-03-10T09:59:45.056 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102 2026-03-10T09:59:45.057 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102 2026-03-10T09:59:45.059 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T09:59:45.059 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102 2026-03-10T09:59:45.060 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102 2026-03-10T09:59:45.063 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102 2026-03-10T09:59:45.065 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102 2026-03-10T09:59:45.067 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102 2026-03-10T09:59:45.070 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102 2026-03-10T09:59:45.073 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102 2026-03-10T09:59:45.074 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102 2026-03-10T09:59:45.076 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102 2026-03-10T09:59:45.079 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102 2026-03-10T09:59:45.080 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : 
python3-logutils-0.3.5-21.el9.noarch 66/102 2026-03-10T09:59:45.081 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102 2026-03-10T09:59:45.084 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102 2026-03-10T09:59:45.087 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102 2026-03-10T09:59:45.089 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T09:59:45.089 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-10T09:59:45.090 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102 2026-03-10T09:59:45.090 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102 2026-03-10T09:59:45.094 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102 2026-03-10T09:59:45.097 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102 2026-03-10T09:59:45.100 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102 2026-03-10T09:59:45.102 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-10T09:59:45.105 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102 2026-03-10T09:59:45.107 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102 2026-03-10T09:59:45.110 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102 2026-03-10T09:59:45.110 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102 2026-03-10T09:59:45.110 INFO:teuthology.orchestra.run.vm08.stdout: Running 
scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T09:59:45.110 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 2026-03-10T09:59:45.110 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:59:45.112 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102 2026-03-10T09:59:45.112 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-10T09:59:45.113 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102 2026-03-10T09:59:45.117 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T09:59:45.121 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102 2026-03-10T09:59:45.127 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102 2026-03-10T09:59:45.131 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102 2026-03-10T09:59:45.133 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102 2026-03-10T09:59:45.135 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102 2026-03-10T09:59:45.141 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102 2026-03-10T09:59:45.145 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102 2026-03-10T09:59:45.146 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T09:59:45.146 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-10T09:59:45.161 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: 
ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-10T09:59:45.166 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T09:59:45.167 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 2026-03-10T09:59:45.167 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:59:45.167 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102 2026-03-10T09:59:45.170 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102 2026-03-10T09:59:45.172 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102 2026-03-10T09:59:45.172 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-10T09:59:45.174 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T09:59:45.204 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T09:59:45.204 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-10T09:59:45.216 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-10T09:59:45.221 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102 2026-03-10T09:59:45.224 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102 2026-03-10T09:59:45.226 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102 2026-03-10T09:59:45.226 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-10T09:59:50.720 INFO:teuthology.orchestra.run.vm01.stdout: 
Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-10T09:59:50.720 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /sys 2026-03-10T09:59:50.720 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /proc 2026-03-10T09:59:50.720 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /mnt 2026-03-10T09:59:50.720 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /var/tmp 2026-03-10T09:59:50.720 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /home 2026-03-10T09:59:50.720 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /root 2026-03-10T09:59:50.720 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /tmp 2026-03-10T09:59:50.720 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:50.728 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102 2026-03-10T09:59:50.747 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T09:59:50.747 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T09:59:50.756 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T09:59:50.758 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102 2026-03-10T09:59:50.760 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102 2026-03-10T09:59:50.762 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102 2026-03-10T09:59:50.764 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102 2026-03-10T09:59:50.765 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T09:59:50.778 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: 
libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T09:59:50.780 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102 2026-03-10T09:59:50.783 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102 2026-03-10T09:59:50.785 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102 2026-03-10T09:59:50.788 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102 2026-03-10T09:59:50.793 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102 2026-03-10T09:59:50.800 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102 2026-03-10T09:59:50.805 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102 2026-03-10T09:59:50.805 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T09:59:50.806 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-10T09:59:50.807 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /sys 2026-03-10T09:59:50.807 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /proc 2026-03-10T09:59:50.807 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /mnt 2026-03-10T09:59:50.807 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /var/tmp 2026-03-10T09:59:50.807 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /home 2026-03-10T09:59:50.807 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /root 2026-03-10T09:59:50.807 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /tmp 2026-03-10T09:59:50.807 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:59:50.815 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102 
2026-03-10T09:59:50.827 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-10T09:59:50.827 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /sys 2026-03-10T09:59:50.827 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /proc 2026-03-10T09:59:50.827 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /mnt 2026-03-10T09:59:50.827 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /var/tmp 2026-03-10T09:59:50.827 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /home 2026-03-10T09:59:50.827 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /root 2026-03-10T09:59:50.827 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /tmp 2026-03-10T09:59:50.827 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:59:50.833 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T09:59:50.833 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T09:59:50.836 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102 2026-03-10T09:59:50.841 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T09:59:50.844 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102 2026-03-10T09:59:50.846 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102 2026-03-10T09:59:50.848 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102 2026-03-10T09:59:50.850 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102 2026-03-10T09:59:50.850 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T09:59:50.852 
INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T09:59:50.853 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T09:59:50.862 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T09:59:50.864 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102 2026-03-10T09:59:50.866 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T09:59:50.866 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102 2026-03-10T09:59:50.868 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102 2026-03-10T09:59:50.869 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102 2026-03-10T09:59:50.870 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102 2026-03-10T09:59:50.871 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102 2026-03-10T09:59:50.871 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T09:59:50.873 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102 2026-03-10T09:59:50.876 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102 2026-03-10T09:59:50.881 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102 2026-03-10T09:59:50.885 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T09:59:50.887 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102 2026-03-10T09:59:50.889 INFO:teuthology.orchestra.run.vm08.stdout: Erasing 
: cryptsetup-2.8.1-3.el9.x86_64 100/102 2026-03-10T09:59:50.889 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102 2026-03-10T09:59:50.892 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102 2026-03-10T09:59:50.894 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102 2026-03-10T09:59:50.894 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T09:59:50.895 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102 2026-03-10T09:59:50.900 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102 2026-03-10T09:59:50.901 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T09:59:50.901 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102 2026-03-10T09:59:50.901 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T09:59:50.901 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102 2026-03-10T09:59:50.901 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102 2026-03-10T09:59:50.901 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102 2026-03-10T09:59:50.901 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102 2026-03-10T09:59:50.901 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T09:59:50.901 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102 2026-03-10T09:59:50.902 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102 
2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102 2026-03-10T09:59:50.902 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: 
Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: 
Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: 
Verifying : python3-packaging-20.9-5.el9.noarch 73/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-10T09:59:50.904 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : 
python3-rsa-4.9-2.el9.noarch 89/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102 2026-03-10T09:59:50.905 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102 2026-03-10T09:59:50.908 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102 2026-03-10T09:59:50.913 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102 2026-03-10T09:59:50.913 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T09:59:50.979 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 
102/102 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout:Removed: 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: 
cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T09:59:50.980 
INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T09:59:50.980 INFO:teuthology.orchestra.run.vm01.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T09:59:50.981 
INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-packaging-20.9-5.el9.noarch 
2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T09:59:50.981 
INFO:teuthology.orchestra.run.vm01.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T09:59:50.981 INFO:teuthology.orchestra.run.vm01.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T09:59:50.982 INFO:teuthology.orchestra.run.vm01.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:50.982 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:50.982 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 
2026-03-10T09:59:51.000 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T09:59:51.000 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102 2026-03-10T09:59:51.000 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T09:59:51.000 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102 2026-03-10T09:59:51.000 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102 2026-03-10T09:59:51.000 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102 2026-03-10T09:59:51.001 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102 2026-03-10T09:59:51.001 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T09:59:51.001 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102 2026-03-10T09:59:51.001 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102 2026-03-10T09:59:51.001 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102 2026-03-10T09:59:51.001 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102 2026-03-10T09:59:51.001 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T09:59:51.001 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102 
2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102 2026-03-10T09:59:51.002 
INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102 2026-03-10T09:59:51.002 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: 
Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: 
Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : 
python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-10T09:59:51.003 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102 2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102 2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 
95/102
2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102
2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102
2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102
2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102
2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102
2026-03-10T09:59:51.004 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102
2026-03-10T09:59:51.011 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-10T09:59:51.011 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102
2026-03-10T09:59:51.011 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T09:59:51.012 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102
2026-03-10T09:59:51.012 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102
2026-03-10T09:59:51.012 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102
2026-03-10T09:59:51.012 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102
2026-03-10T09:59:51.012 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102
2026-03-10T09:59:51.012 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102
2026-03-10T09:59:51.012 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102
2026-03-10T09:59:51.013 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102
2026-03-10T09:59:51.014 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102
2026-03-10T09:59:51.015 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-10T09:59:51.083 INFO:teuthology.orchestra.run.vm08.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-chardet-4.0.0-5.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-idna-2.10-7.el9.1.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpatch-1.21-16.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpointer-2.0-4.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-oauthlib-3.1.1-5.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply-3.11-14.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-prettytable-0.7.2-27.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-pysocks-1.7.1-12.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-pytz-2021.1-5.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-10T09:59:51.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-10T09:59:51.085 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-10T09:59:51.085 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-10T09:59:51.085 INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-10T09:59:51.085 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-10T09:59:51.085 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-10T09:59:51.085 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-10T09:59:51.085 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-10T09:59:51.085 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-10T09:59:51.085 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-10T09:59:51.085 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-10T09:59:51.085 INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-10T09:59:51.085 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.085 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:51.085 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:59:51.092 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout:Removed:
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: python3-chardet-4.0.0-5.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-10T09:59:51.093 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-idna-2.10-7.el9.1.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-jsonpatch-1.21-16.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-jsonpointer-2.0-4.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-oauthlib-3.1.1-5.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-ply-3.11-14.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-prettytable-0.7.2-27.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-pysocks-1.7.1-12.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-pytz-2021.1-5.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-10T09:59:51.094 INFO:teuthology.orchestra.run.vm02.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-10T09:59:51.095 INFO:teuthology.orchestra.run.vm02.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:51.095 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:51.095 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:51.191 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:59:51.192 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:51.192 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T09:59:51.192 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:51.192 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T09:59:51.192 INFO:teuthology.orchestra.run.vm01.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k
2026-03-10T09:59:51.192 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:51.192 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T09:59:51.192 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:51.192 INFO:teuthology.orchestra.run.vm01.stdout:Remove 1 Package
2026-03-10T09:59:51.192 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:51.192 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 775 k
2026-03-10T09:59:51.192 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T09:59:51.194 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T09:59:51.194 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T09:59:51.195 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T09:59:51.195 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T09:59:51.210 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T09:59:51.210 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T09:59:51.291 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:59:51.292 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:51.292 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T09:59:51.292 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:51.292 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T09:59:51.292 INFO:teuthology.orchestra.run.vm08.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k
2026-03-10T09:59:51.292 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:51.292 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T09:59:51.292 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:51.292 INFO:teuthology.orchestra.run.vm08.stdout:Remove 1 Package
2026-03-10T09:59:51.292 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:51.292 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 775 k
2026-03-10T09:59:51.292 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T09:59:51.293 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T09:59:51.293 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T09:59:51.295 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T09:59:51.295 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T09:59:51.309 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:59:51.309 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:51.309 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size
2026-03-10T09:59:51.309 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:51.309 INFO:teuthology.orchestra.run.vm02.stdout:Removing:
2026-03-10T09:59:51.309 INFO:teuthology.orchestra.run.vm02.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k
2026-03-10T09:59:51.309 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:51.309 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T09:59:51.309 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:51.309 INFO:teuthology.orchestra.run.vm02.stdout:Remove 1 Package
2026-03-10T09:59:51.309 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:51.309 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 775 k
2026-03-10T09:59:51.309 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check
2026-03-10T09:59:51.310 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T09:59:51.310 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T09:59:51.311 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded.
2026-03-10T09:59:51.311 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T09:59:51.312 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T09:59:51.312 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T09:59:51.313 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T09:59:51.328 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T09:59:51.329 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T09:59:51.360 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T09:59:51.360 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:51.361 INFO:teuthology.orchestra.run.vm01.stdout:Removed: 2026-03-10T09:59:51.361 INFO:teuthology.orchestra.run.vm01.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:59:51.361 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:51.361 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T09:59:51.428 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T09:59:51.457 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T09:59:51.468 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T09:59:51.469 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:59:51.469 INFO:teuthology.orchestra.run.vm08.stdout:Removed: 2026-03-10T09:59:51.469 INFO:teuthology.orchestra.run.vm08.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:59:51.469 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:59:51.469 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 
2026-03-10T09:59:51.544 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T09:59:51.545 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:59:51.545 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T09:59:51.545 INFO:teuthology.orchestra.run.vm02.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T09:59:51.545 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:59:51.545 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T09:59:51.545 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-immutable-object-cache 2026-03-10T09:59:51.545 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T09:59:51.549 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T09:59:51.549 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T09:59:51.550 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T09:59:51.694 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-immutable-object-cache 2026-03-10T09:59:51.694 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T09:59:51.698 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T09:59:51.698 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T09:59:51.698 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T09:59:51.782 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-mgr 2026-03-10T09:59:51.782 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T09:59:51.786 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T09:59:51.786 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T09:59:51.786 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 
2026-03-10T09:59:51.828 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-immutable-object-cache 2026-03-10T09:59:51.828 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T09:59:51.831 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T09:59:51.832 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T09:59:51.832 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T09:59:51.882 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr 2026-03-10T09:59:51.882 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T09:59:51.885 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T09:59:51.886 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T09:59:51.886 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T09:59:51.954 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-mgr-dashboard 2026-03-10T09:59:51.954 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T09:59:51.958 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T09:59:51.958 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T09:59:51.958 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T09:59:52.015 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-mgr 2026-03-10T09:59:52.015 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T09:59:52.019 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T09:59:52.019 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T09:59:52.019 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 
2026-03-10T09:59:52.064 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-dashboard 2026-03-10T09:59:52.064 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T09:59:52.067 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T09:59:52.068 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T09:59:52.068 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T09:59:52.137 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-10T09:59:52.137 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T09:59:52.140 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T09:59:52.141 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T09:59:52.141 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T09:59:52.197 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-mgr-dashboard 2026-03-10T09:59:52.197 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T09:59:52.200 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T09:59:52.201 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T09:59:52.201 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T09:59:52.240 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-10T09:59:52.240 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T09:59:52.243 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T09:59:52.244 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T09:59:52.244 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 
2026-03-10T09:59:52.323 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-mgr-rook 2026-03-10T09:59:52.323 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T09:59:52.327 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T09:59:52.327 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T09:59:52.327 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T09:59:52.376 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-10T09:59:52.376 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T09:59:52.379 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T09:59:52.380 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T09:59:52.380 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T09:59:52.425 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-rook 2026-03-10T09:59:52.425 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T09:59:52.428 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T09:59:52.429 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T09:59:52.429 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T09:59:52.512 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-mgr-cephadm 2026-03-10T09:59:52.512 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T09:59:52.516 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T09:59:52.516 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T09:59:52.516 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 
2026-03-10T09:59:52.561 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-mgr-rook 2026-03-10T09:59:52.561 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T09:59:52.564 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T09:59:52.565 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T09:59:52.565 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T09:59:52.618 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-cephadm 2026-03-10T09:59:52.618 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T09:59:52.621 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T09:59:52.622 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T09:59:52.622 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T09:59:52.704 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T09:59:52.704 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T09:59:52.704 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size 2026-03-10T09:59:52.704 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T09:59:52.704 INFO:teuthology.orchestra.run.vm01.stdout:Removing: 2026-03-10T09:59:52.704 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M 2026-03-10T09:59:52.704 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:52.704 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary 2026-03-10T09:59:52.704 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T09:59:52.704 INFO:teuthology.orchestra.run.vm01.stdout:Remove 1 Package 2026-03-10T09:59:52.704 
INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:52.704 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 3.6 M 2026-03-10T09:59:52.704 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check 2026-03-10T09:59:52.706 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded. 2026-03-10T09:59:52.706 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test 2026-03-10T09:59:52.717 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded. 2026-03-10T09:59:52.718 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction 2026-03-10T09:59:52.744 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1 2026-03-10T09:59:52.748 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-mgr-cephadm 2026-03-10T09:59:52.748 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T09:59:52.751 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T09:59:52.752 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T09:59:52.752 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T09:59:52.758 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T09:59:52.807 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 
2026-03-10T09:59:52.808 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T09:59:52.808 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size 2026-03-10T09:59:52.808 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T09:59:52.808 INFO:teuthology.orchestra.run.vm08.stdout:Removing: 2026-03-10T09:59:52.808 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M 2026-03-10T09:59:52.808 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:59:52.808 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary 2026-03-10T09:59:52.808 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T09:59:52.808 INFO:teuthology.orchestra.run.vm08.stdout:Remove 1 Package 2026-03-10T09:59:52.808 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:59:52.808 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 3.6 M 2026-03-10T09:59:52.808 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check 2026-03-10T09:59:52.809 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded. 2026-03-10T09:59:52.809 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test 2026-03-10T09:59:52.818 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded. 
2026-03-10T09:59:52.818 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction 2026-03-10T09:59:52.820 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T09:59:52.842 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1 2026-03-10T09:59:52.858 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T09:59:52.868 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T09:59:52.868 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:52.868 INFO:teuthology.orchestra.run.vm01.stdout:Removed: 2026-03-10T09:59:52.868 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:52.868 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:52.868 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T09:59:52.924 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T09:59:52.943 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 
2026-03-10T09:59:52.946 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T09:59:52.947 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size 2026-03-10T09:59:52.947 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T09:59:52.947 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T09:59:52.947 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M 2026-03-10T09:59:52.947 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:59:52.947 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T09:59:52.947 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T09:59:52.947 INFO:teuthology.orchestra.run.vm02.stdout:Remove 1 Package 2026-03-10T09:59:52.947 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:59:52.947 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 3.6 M 2026-03-10T09:59:52.947 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T09:59:52.947 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T09:59:52.947 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T09:59:52.956 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 
2026-03-10T09:59:52.956 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T09:59:52.969 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T09:59:52.969 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:59:52.969 INFO:teuthology.orchestra.run.vm08.stdout:Removed: 2026-03-10T09:59:52.969 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:52.969 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:59:52.969 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T09:59:52.981 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T09:59:52.996 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T09:59:53.061 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T09:59:53.081 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-volume 2026-03-10T09:59:53.082 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T09:59:53.085 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T09:59:53.086 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T09:59:53.086 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T09:59:53.101 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T09:59:53.101 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:59:53.101 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T09:59:53.101 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:53.101 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:59:53.101 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 
2026-03-10T09:59:53.156 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-volume 2026-03-10T09:59:53.156 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T09:59:53.159 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T09:59:53.160 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T09:59:53.160 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T09:59:53.271 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T09:59:53.271 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T09:59:53.271 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repo Size 2026-03-10T09:59:53.271 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T09:59:53.271 INFO:teuthology.orchestra.run.vm01.stdout:Removing: 2026-03-10T09:59:53.271 INFO:teuthology.orchestra.run.vm01.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k 2026-03-10T09:59:53.271 INFO:teuthology.orchestra.run.vm01.stdout:Removing dependent packages: 2026-03-10T09:59:53.271 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k 2026-03-10T09:59:53.271 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:53.271 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary 2026-03-10T09:59:53.271 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T09:59:53.271 INFO:teuthology.orchestra.run.vm01.stdout:Remove 2 Packages 2026-03-10T09:59:53.271 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:53.271 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 610 k 2026-03-10T09:59:53.271 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check 
2026-03-10T09:59:53.273 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded. 2026-03-10T09:59:53.273 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test 2026-03-10T09:59:53.283 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded. 2026-03-10T09:59:53.283 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction 2026-03-10T09:59:53.286 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-volume 2026-03-10T09:59:53.286 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T09:59:53.289 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T09:59:53.290 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T09:59:53.290 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T09:59:53.308 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1 2026-03-10T09:59:53.310 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T09:59:53.324 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-10T09:59:53.338 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 
2026-03-10T09:59:53.338 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T09:59:53.338 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repo Size 2026-03-10T09:59:53.338 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T09:59:53.338 INFO:teuthology.orchestra.run.vm08.stdout:Removing: 2026-03-10T09:59:53.338 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k 2026-03-10T09:59:53.338 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages: 2026-03-10T09:59:53.338 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k 2026-03-10T09:59:53.338 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:59:53.338 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary 2026-03-10T09:59:53.338 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T09:59:53.338 INFO:teuthology.orchestra.run.vm08.stdout:Remove 2 Packages 2026-03-10T09:59:53.338 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:59:53.338 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 610 k 2026-03-10T09:59:53.338 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check 2026-03-10T09:59:53.340 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded. 2026-03-10T09:59:53.340 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test 2026-03-10T09:59:53.350 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded. 
2026-03-10T09:59:53.350 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction 2026-03-10T09:59:53.374 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1 2026-03-10T09:59:53.376 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T09:59:53.378 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-10T09:59:53.378 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T09:59:53.390 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-10T09:59:53.419 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-10T09:59:53.419 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:53.419 INFO:teuthology.orchestra.run.vm01.stdout:Removed: 2026-03-10T09:59:53.419 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:53.419 INFO:teuthology.orchestra.run.vm01.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:53.419 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:59:53.419 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T09:59:53.452 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-10T09:59:53.452 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T09:59:53.470 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 
2026-03-10T09:59:53.471 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T09:59:53.471 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repo Size 2026-03-10T09:59:53.471 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T09:59:53.471 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T09:59:53.471 INFO:teuthology.orchestra.run.vm02.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k 2026-03-10T09:59:53.471 INFO:teuthology.orchestra.run.vm02.stdout:Removing dependent packages: 2026-03-10T09:59:53.471 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k 2026-03-10T09:59:53.471 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:59:53.471 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T09:59:53.471 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T09:59:53.471 INFO:teuthology.orchestra.run.vm02.stdout:Remove 2 Packages 2026-03-10T09:59:53.471 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T09:59:53.471 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 610 k 2026-03-10T09:59:53.471 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T09:59:53.473 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T09:59:53.473 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T09:59:53.484 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 
2026-03-10T09:59:53.484 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T09:59:53.498 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-10T09:59:53.498 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:59:53.498 INFO:teuthology.orchestra.run.vm08.stdout:Removed: 2026-03-10T09:59:53.498 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:53.499 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:59:53.499 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:59:53.499 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T09:59:53.509 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T09:59:53.511 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T09:59:53.525 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-10T09:59:53.582 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-10T09:59:53.582 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T09:59:53.614 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repo Size
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout:Removing dependent packages:
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies:
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout:Remove 3 Packages
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 3.7 M
2026-03-10T09:59:53.615 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T09:59:53.617 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T09:59:53.617 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T09:59:53.625 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T09:59:53.625 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:53.625 INFO:teuthology.orchestra.run.vm02.stdout:Removed:
2026-03-10T09:59:53.625 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:53.625 INFO:teuthology.orchestra.run.vm02.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:53.625 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:53.625 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:53.633 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T09:59:53.633 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T09:59:53.665 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T09:59:53.667 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T09:59:53.668 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T09:59:53.668 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T09:59:53.697 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:59:53.697 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:53.697 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repo Size
2026-03-10T09:59:53.697 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:53.697 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T09:59:53.697 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T09:59:53.697 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages:
2026-03-10T09:59:53.698 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T09:59:53.698 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T09:59:53.698 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T09:59:53.698 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:53.698 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T09:59:53.698 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:53.698 INFO:teuthology.orchestra.run.vm08.stdout:Remove 3 Packages
2026-03-10T09:59:53.698 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:53.698 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 3.7 M
2026-03-10T09:59:53.698 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T09:59:53.700 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T09:59:53.700 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T09:59:53.717 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T09:59:53.718 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T09:59:53.730 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T09:59:53.730 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T09:59:53.730 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T09:59:53.753 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T09:59:53.755 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T09:59:53.756 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T09:59:53.756 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T09:59:53.766 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T09:59:53.766 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:53.766 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T09:59:53.766 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:53.766 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:53.766 INFO:teuthology.orchestra.run.vm01.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:53.766 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:53.766 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:59:53.816 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:59:53.816 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:53.816 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repo Size
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm02.stdout:Removing:
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm02.stdout:Removing dependent packages:
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies:
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm02.stdout:Remove 3 Packages
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 3.7 M
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T09:59:53.817 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T09:59:53.819 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded.
2026-03-10T09:59:53.819 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T09:59:53.835 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T09:59:53.835 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction
2026-03-10T09:59:53.857 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T09:59:53.857 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:53.857 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T09:59:53.857 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:53.857 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:53.857 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:53.857 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:53.857 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:59:53.865 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1
2026-03-10T09:59:53.867 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T09:59:53.869 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T09:59:53.869 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T09:59:53.934 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T09:59:53.934 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T09:59:53.934 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T09:59:53.943 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: libcephfs-devel
2026-03-10T09:59:53.944 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T09:59:53.947 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:59:53.948 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T09:59:53.948 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:59:53.975 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T09:59:53.975 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:53.975 INFO:teuthology.orchestra.run.vm02.stdout:Removed:
2026-03-10T09:59:53.975 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:53.975 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:53.975 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:53.975 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:53.975 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:54.034 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: libcephfs-devel
2026-03-10T09:59:54.034 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T09:59:54.037 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:59:54.038 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T09:59:54.038 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:59:54.129 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:59:54.130 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:54.130 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T09:59:54.130 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:54.130 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout:Removing dependent packages:
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies:
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout:Remove 20 Packages
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 79 M
2026-03-10T09:59:54.131 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T09:59:54.135 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T09:59:54.135 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T09:59:54.142 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: libcephfs-devel
2026-03-10T09:59:54.142 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T09:59:54.146 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:59:54.146 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T09:59:54.146 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:54.157 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T09:59:54.157 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T09:59:54.201 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T09:59:54.204 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-10T09:59:54.206 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-10T09:59:54.209 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-10T09:59:54.209 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T09:59:54.214 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:59:54.215 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:54.215 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T09:59:54.215 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:54.215 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T09:59:54.215 INFO:teuthology.orchestra.run.vm08.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-10T09:59:54.215 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages:
2026-03-10T09:59:54.215 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T09:59:54.215 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T09:59:54.215 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout:Remove 20 Packages
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 79 M
2026-03-10T09:59:54.216 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T09:59:54.220 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T09:59:54.220 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T09:59:54.222 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T09:59:54.224 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-10T09:59:54.226 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-10T09:59:54.228 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T09:59:54.230 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-10T09:59:54.233 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-10T09:59:54.233 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T09:59:54.240 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T09:59:54.241 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T09:59:54.249 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T09:59:54.249 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T09:59:54.249 INFO:teuthology.orchestra.run.vm01.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T09:59:54.249 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:54.263 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T09:59:54.265 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-10T09:59:54.268 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-10T09:59:54.271 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-10T09:59:54.274 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-10T09:59:54.277 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-10T09:59:54.279 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-10T09:59:54.281 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-10T09:59:54.282 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T09:59:54.283 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-10T09:59:54.285 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-10T09:59:54.287 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-10T09:59:54.290 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-10T09:59:54.290 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T09:59:54.298 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T09:59:54.303 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T09:59:54.305 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-10T09:59:54.307 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-10T09:59:54.309 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T09:59:54.310 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-10T09:59:54.313 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-10T09:59:54.313 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T09:59:54.327 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T09:59:54.327 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T09:59:54.327 INFO:teuthology.orchestra.run.vm08.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T09:59:54.327 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:54.328 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout:Removing:
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout:Removing dependent packages:
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies:
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T09:59:54.329 INFO:teuthology.orchestra.run.vm02.stdout:Remove 20 Packages
2026-03-10T09:59:54.330 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:54.330 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 79 M
2026-03-10T09:59:54.330 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check
2026-03-10T09:59:54.333 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded.
2026-03-10T09:59:54.333 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T09:59:54.340 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T09:59:54.343 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-10T09:59:54.346 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-10T09:59:54.349 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-10T09:59:54.352 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-10T09:59:54.355 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T09:59:54.355 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-10T09:59:54.355 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-10T09:59:54.355 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-10T09:59:54.355 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-10T09:59:54.356 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-10T09:59:54.357 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-10T09:59:54.359 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-10T09:59:54.361 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-10T09:59:54.376 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:59:54.403 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:59:54.405 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-10T09:59:54.407 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-10T09:59:54.411 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-10T09:59:54.411 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T09:59:54.425 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T09:59:54.427 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-10T09:59:54.429 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-10T09:59:54.430 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T09:59:54.432 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-10T09:59:54.434 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-10T09:59:54.434 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T09:59:54.444 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T09:59:54.444 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-10T09:59:54.444 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-10T09:59:54.444 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-10T09:59:54.444 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-10T09:59:54.445 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-10T09:59:54.445 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-10T09:59:54.445 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T09:59:54.445 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-10T09:59:54.445 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-10T09:59:54.445 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T09:59:54.445 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-10T09:59:54.445 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-10T09:59:54.445 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-10T09:59:54.445 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-10T09:59:54.445 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-10T09:59:54.445 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-10T09:59:54.445 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-10T09:59:54.445 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-10T09:59:54.445 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-10T09:59:54.448 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T09:59:54.449 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T09:59:54.449 INFO:teuthology.orchestra.run.vm02.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T09:59:54.449 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:54.462 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T09:59:54.465 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-10T09:59:54.469 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-10T09:59:54.473 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-10T09:59:54.476 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-10T09:59:54.479 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-10T09:59:54.481 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-10T09:59:54.483 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-10T09:59:54.486 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-10T09:59:54.490 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-10T09:59:54.490 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:54.490 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T09:59:54.490 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T09:59:54.490 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T09:59:54.490 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T09:59:54.490 INFO:teuthology.orchestra.run.vm08.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T09:59:54.490 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T09:59:54.490 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T09:59:54.490 INFO:teuthology.orchestra.run.vm08.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.490 INFO:teuthology.orchestra.run.vm08.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.491 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T09:59:54.491 INFO:teuthology.orchestra.run.vm08.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.491 INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T09:59:54.491 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T09:59:54.491 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.491 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.491 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.491 INFO:teuthology.orchestra.run.vm08.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T09:59:54.491 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.491 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.491 INFO:teuthology.orchestra.run.vm08.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T09:59:54.491 INFO:teuthology.orchestra.run.vm08.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T09:59:54.491 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:59:54.491 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:59:54.506 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T09:59:54.566 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T09:59:54.566 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-10T09:59:54.566 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-10T09:59:54.566 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-10T09:59:54.566 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-10T09:59:54.567 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-10T09:59:54.567 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-10T09:59:54.567 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T09:59:54.567 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-10T09:59:54.567 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-10T09:59:54.567 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T09:59:54.567 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-10T09:59:54.567 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-10T09:59:54.567 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-10T09:59:54.567 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-10T09:59:54.567 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-10T09:59:54.567 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-10T09:59:54.567 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-10T09:59:54.567 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-10T09:59:54.567 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout:Removed:
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T09:59:54.607 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:54.633 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: librbd1
2026-03-10T09:59:54.633 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T09:59:54.635 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:59:54.635 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T09:59:54.635 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:59:54.707 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: librbd1
2026-03-10T09:59:54.708 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T09:59:54.711 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:59:54.712 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T09:59:54.712 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:59:54.831 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: librbd1
2026-03-10T09:59:54.831 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T09:59:54.833 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:59:54.834 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T09:59:54.834 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:54.839 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: python3-rados
2026-03-10T09:59:54.839 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T09:59:54.841 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:59:54.842 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T09:59:54.842 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:59:54.922 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-rados
2026-03-10T09:59:54.923 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T09:59:54.924 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:59:54.925 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T09:59:54.925 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:59:55.003 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: python3-rados
2026-03-10T09:59:55.003 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T09:59:55.005 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:59:55.006 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T09:59:55.006 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:55.007 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: python3-rgw
2026-03-10T09:59:55.007 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T09:59:55.009 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:59:55.010 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T09:59:55.010 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:59:55.088 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-rgw
2026-03-10T09:59:55.089 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T09:59:55.091 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:59:55.091 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T09:59:55.091 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:59:55.161 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: python3-rgw
2026-03-10T09:59:55.161 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T09:59:55.163 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:59:55.164 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T09:59:55.164 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:55.177 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: python3-cephfs
2026-03-10T09:59:55.177 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T09:59:55.179 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:59:55.180 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T09:59:55.180 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:59:55.263 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-cephfs
2026-03-10T09:59:55.263 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T09:59:55.265 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:59:55.266 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T09:59:55.266 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:59:55.334 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: python3-cephfs
2026-03-10T09:59:55.334 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T09:59:55.336 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:59:55.337 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T09:59:55.337 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:55.356 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: python3-rbd
2026-03-10T09:59:55.356 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T09:59:55.358 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:59:55.358 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T09:59:55.358 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:59:55.449 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-rbd
2026-03-10T09:59:55.449 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T09:59:55.452 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:59:55.453 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T09:59:55.453 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:59:55.501 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: python3-rbd
2026-03-10T09:59:55.501 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T09:59:55.504 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:59:55.504 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T09:59:55.504 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:55.545 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: rbd-fuse
2026-03-10T09:59:55.545 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T09:59:55.547 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:59:55.547 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T09:59:55.547 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:59:55.608 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: rbd-fuse
2026-03-10T09:59:55.608 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T09:59:55.610 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:59:55.610 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T09:59:55.610 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:59:55.672 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: rbd-fuse
2026-03-10T09:59:55.672 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T09:59:55.674 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:59:55.674 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T09:59:55.674 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:55.715 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: rbd-mirror
2026-03-10T09:59:55.715 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T09:59:55.717 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:59:55.717 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T09:59:55.717 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:59:55.776 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: rbd-mirror
2026-03-10T09:59:55.776 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T09:59:55.778 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:59:55.778 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T09:59:55.778 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:59:55.844 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: rbd-mirror
2026-03-10T09:59:55.844 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T09:59:55.846 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:59:55.846 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T09:59:55.846 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:55.892 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: rbd-nbd
2026-03-10T09:59:55.892 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T09:59:55.894 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:59:55.894 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T09:59:55.894 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:59:55.913 DEBUG:teuthology.orchestra.run.vm01:> sudo yum clean all
2026-03-10T09:59:55.944 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: rbd-nbd
2026-03-10T09:59:55.944 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T09:59:55.947 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:59:55.947 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T09:59:55.947 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:59:55.972 DEBUG:teuthology.orchestra.run.vm08:> sudo yum clean all
2026-03-10T09:59:56.019 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: rbd-nbd
2026-03-10T09:59:56.019 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T09:59:56.022 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T09:59:56.022 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T09:59:56.022 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T09:59:56.045 DEBUG:teuthology.orchestra.run.vm02:> sudo yum clean all
2026-03-10T09:59:56.050 INFO:teuthology.orchestra.run.vm01.stdout:56 files removed
2026-03-10T09:59:56.075 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T09:59:56.102 DEBUG:teuthology.orchestra.run.vm01:> sudo yum clean expire-cache
2026-03-10T09:59:56.107 INFO:teuthology.orchestra.run.vm08.stdout:56 files removed
2026-03-10T09:59:56.126 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T09:59:56.153 DEBUG:teuthology.orchestra.run.vm08:> sudo yum clean expire-cache
2026-03-10T09:59:56.180 INFO:teuthology.orchestra.run.vm02.stdout:56 files removed
2026-03-10T09:59:56.202 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T09:59:56.230 DEBUG:teuthology.orchestra.run.vm02:> sudo yum clean expire-cache
2026-03-10T09:59:56.257 INFO:teuthology.orchestra.run.vm01.stdout:Cache was expired
2026-03-10T09:59:56.257 INFO:teuthology.orchestra.run.vm01.stdout:0 files removed
2026-03-10T09:59:56.277 DEBUG:teuthology.parallel:result is None
2026-03-10T09:59:56.305 INFO:teuthology.orchestra.run.vm08.stdout:Cache was expired
2026-03-10T09:59:56.305 INFO:teuthology.orchestra.run.vm08.stdout:0 files removed
2026-03-10T09:59:56.325 DEBUG:teuthology.parallel:result is None
2026-03-10T09:59:56.384 INFO:teuthology.orchestra.run.vm02.stdout:Cache was expired
2026-03-10T09:59:56.384 INFO:teuthology.orchestra.run.vm02.stdout:0 files removed
2026-03-10T09:59:56.403 DEBUG:teuthology.parallel:result is None
2026-03-10T09:59:56.403 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm01.local
2026-03-10T09:59:56.403 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm02.local
2026-03-10T09:59:56.403 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm08.local
2026-03-10T09:59:56.403 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T09:59:56.404 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T09:59:56.404 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T09:59:56.430 DEBUG:teuthology.orchestra.run.vm08:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-10T09:59:56.431 DEBUG:teuthology.orchestra.run.vm02:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-10T09:59:56.432 DEBUG:teuthology.orchestra.run.vm01:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-10T09:59:56.498 DEBUG:teuthology.parallel:result is None
2026-03-10T09:59:56.499 DEBUG:teuthology.parallel:result is None
2026-03-10T09:59:56.501 DEBUG:teuthology.parallel:result is None
2026-03-10T09:59:56.501 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-10T09:59:56.503 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-10T09:59:56.503 DEBUG:teuthology.orchestra.run.vm01:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T09:59:56.543 DEBUG:teuthology.orchestra.run.vm02:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T09:59:56.544 DEBUG:teuthology.orchestra.run.vm08:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T09:59:56.555 INFO:teuthology.orchestra.run.vm01.stderr:bash: line 1: ntpq: command not found
2026-03-10T09:59:56.559 INFO:teuthology.orchestra.run.vm02.stderr:bash: line 1: ntpq: command not found
2026-03-10T09:59:56.560 INFO:teuthology.orchestra.run.vm08.stderr:bash: line 1: ntpq: command not found
2026-03-10T09:59:56.690 INFO:teuthology.orchestra.run.vm02.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T09:59:56.690 INFO:teuthology.orchestra.run.vm02.stdout:===============================================================================
2026-03-10T09:59:56.690 INFO:teuthology.orchestra.run.vm02.stdout:^- static.119.109.140.128.c> 2 6 377 9 -150us[ -150us] +/- 53ms
2026-03-10T09:59:56.690 INFO:teuthology.orchestra.run.vm02.stdout:^* time2.sebhosting.de 2 6 377 9 -272us[ -397us] +/- 16ms
2026-03-10T09:59:56.690 INFO:teuthology.orchestra.run.vm02.stdout:^+ mail.light-speed.de 2 6 377 9 +294us[ +294us] +/- 18ms
2026-03-10T09:59:56.690 INFO:teuthology.orchestra.run.vm02.stdout:^+ 141.84.43.73 2 6 377 9 -1223us[-1223us] +/- 23ms
2026-03-10T09:59:56.690 INFO:teuthology.orchestra.run.vm08.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T09:59:56.691 INFO:teuthology.orchestra.run.vm08.stdout:===============================================================================
2026-03-10T09:59:56.691 INFO:teuthology.orchestra.run.vm08.stdout:^+ 141.84.43.73 2 6 377 9 +824us[ +824us] +/- 22ms
2026-03-10T09:59:56.691 INFO:teuthology.orchestra.run.vm08.stdout:^- static.119.109.140.128.c> 2 6 377 9 +45us[ +45us] +/- 53ms
2026-03-10T09:59:56.691 INFO:teuthology.orchestra.run.vm08.stdout:^* time2.sebhosting.de 2 6 377 9 -60us[ +120us] +/- 16ms
2026-03-10T09:59:56.691 INFO:teuthology.orchestra.run.vm08.stdout:^+ mail.light-speed.de 2 6 377 10 +416us[ +596us] +/- 19ms
2026-03-10T09:59:56.691 INFO:teuthology.orchestra.run.vm01.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T09:59:56.691 INFO:teuthology.orchestra.run.vm01.stdout:===============================================================================
2026-03-10T09:59:56.691 INFO:teuthology.orchestra.run.vm01.stdout:^+ 141.84.43.73 2 6 377 10 +1585us[+1713us] +/- 23ms
2026-03-10T09:59:56.691 INFO:teuthology.orchestra.run.vm01.stdout:^- static.119.109.140.128.c> 2 6 377 10 -445us[ -318us] +/- 53ms
2026-03-10T09:59:56.691 INFO:teuthology.orchestra.run.vm01.stdout:^* time2.sebhosting.de 2 6 377 9 -642us[ -514us] +/- 16ms
2026-03-10T09:59:56.691 INFO:teuthology.orchestra.run.vm01.stdout:^+ mail.light-speed.de 2 6 377 10 +51us[ +179us] +/- 18ms
2026-03-10T09:59:56.691 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-10T09:59:56.693 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-10T09:59:56.694 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-10T09:59:56.695 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-10T09:59:56.697 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-10T09:59:56.699 INFO:teuthology.task.internal:Duration was 470.493374 seconds
2026-03-10T09:59:56.699 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-10T09:59:56.701 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-10T09:59:56.701 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T09:59:56.733 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T09:59:56.735 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T09:59:56.768 INFO:teuthology.orchestra.run.vm01.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T09:59:56.770 INFO:teuthology.orchestra.run.vm02.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T09:59:56.778 INFO:teuthology.orchestra.run.vm08.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T09:59:57.215 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-10T09:59:57.215 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm01.local
2026-03-10T09:59:57.215 DEBUG:teuthology.orchestra.run.vm01:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T09:59:57.279 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm02.local
2026-03-10T09:59:57.279 DEBUG:teuthology.orchestra.run.vm02:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T09:59:57.302 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm08.local
2026-03-10T09:59:57.303 DEBUG:teuthology.orchestra.run.vm08:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T09:59:57.325 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-10T09:59:57.325 DEBUG:teuthology.orchestra.run.vm01:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T09:59:57.327 DEBUG:teuthology.orchestra.run.vm02:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T09:59:57.344 DEBUG:teuthology.orchestra.run.vm08:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T09:59:57.762 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T09:59:57.763 DEBUG:teuthology.orchestra.run.vm01:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T09:59:57.764 DEBUG:teuthology.orchestra.run.vm02:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T09:59:57.766 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T09:59:57.786 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T09:59:57.787 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T09:59:57.787 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: gzip 0.0% -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T09:59:57.787 INFO:teuthology.orchestra.run.vm01.stderr: -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T09:59:57.787 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T09:59:57.789 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T09:59:57.789 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T09:59:57.789 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T09:59:57.790 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T09:59:57.790 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T09:59:57.790 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T09:59:57.790 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T09:59:57.790 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T09:59:57.790 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T09:59:57.790 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T09:59:57.918 INFO:teuthology.orchestra.run.vm02.stderr: 98.3% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T09:59:57.920 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.4% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T09:59:57.923 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.2% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T09:59:57.925 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T09:59:57.927 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T09:59:57.928 DEBUG:teuthology.orchestra.run.vm01:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T09:59:57.991 DEBUG:teuthology.orchestra.run.vm02:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T09:59:58.015 DEBUG:teuthology.orchestra.run.vm08:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T09:59:58.044 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T09:59:58.047 DEBUG:teuthology.orchestra.run.vm01:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T09:59:58.049 DEBUG:teuthology.orchestra.run.vm02:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T09:59:58.057 DEBUG:teuthology.orchestra.run.vm08:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T09:59:58.070 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = core
2026-03-10T09:59:58.083 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = core
2026-03-10T09:59:58.113 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = core
2026-03-10T09:59:58.126 DEBUG:teuthology.orchestra.run.vm01:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T09:59:58.141 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:59:58.141 DEBUG:teuthology.orchestra.run.vm02:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T09:59:58.158 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:59:58.159 DEBUG:teuthology.orchestra.run.vm08:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T09:59:58.183 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:59:58.183 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T09:59:58.186 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T09:59:58.186 DEBUG:teuthology.misc:Transferring archived files from vm01:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/990/remote/vm01
2026-03-10T09:59:58.186 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T09:59:58.216 DEBUG:teuthology.misc:Transferring archived files from vm02:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/990/remote/vm02
2026-03-10T09:59:58.216 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T09:59:58.243 DEBUG:teuthology.misc:Transferring archived files from vm08:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/990/remote/vm08
2026-03-10T09:59:58.243 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T09:59:58.273 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T09:59:58.273 DEBUG:teuthology.orchestra.run.vm01:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T09:59:58.275 DEBUG:teuthology.orchestra.run.vm02:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T09:59:58.285 DEBUG:teuthology.orchestra.run.vm08:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T09:59:58.329 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T09:59:58.332 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T09:59:58.332 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T09:59:58.335 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-10T09:59:58.335 DEBUG:teuthology.orchestra.run.vm01:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T09:59:58.337 DEBUG:teuthology.orchestra.run.vm02:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T09:59:58.340 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T09:59:58.351 INFO:teuthology.orchestra.run.vm01.stdout: 8532144 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 09:59 /home/ubuntu/cephtest
2026-03-10T09:59:58.353 INFO:teuthology.orchestra.run.vm02.stdout: 8532143 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 09:59 /home/ubuntu/cephtest
2026-03-10T09:59:58.385 INFO:teuthology.orchestra.run.vm08.stdout: 8532145 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 09:59 /home/ubuntu/cephtest
2026-03-10T09:59:58.386 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T09:59:58.392 INFO:teuthology.run:Summary data:
  description: orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_rgw_multisite}
  duration: 470.4933738708496
  flavor: default
  owner: kyr
  success: true
2026-03-10T09:59:58.392 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T09:59:58.411 INFO:teuthology.run:pass