2026-03-10T06:36:41.984 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T06:36:41.989 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T06:36:42.007 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/932
branch: squid
description: orch/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_set_mon_crush_locations}
email: null
first_in_suite: false
flavor: default
job_id: '932'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 1
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: false
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - MON_DOWN
    - POOL_APP_NOT_ENABLED
    - mon down
    - mons down
    - out of quorum
    - CEPHADM_FAILED_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      extra_system_packages:
        deb:
        - python3-xmltodict
        - python3-jmespath
        rpm:
        - bzip2
        - perl-Test-Harness
        - python3-xmltodict
        - python3-jmespath
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - osd.0
  - mon.a
  - mgr.a
- - host.b
  - osd.1
  - mon.b
  - mgr.b
- - host.c
  - osd.2
  - mon.c
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm01.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLJidLAGtBdbRy8nQ/IZCyD/Gy9h5pqxTWS5KnTX0zyUJgnCE6d8ptnLvdZQ0FssUHIWWb3YHYHAXp51ngGPy1g=
  vm08.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGZukW8ZifjzQOWa43NsjwVpZb06knbUvYbboQtLMRgZVVAGE3xFQfgL76YJPpM0BKgbf4Ky/khudmbZ+bBjHm4=
  vm09.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDagq4EqMHg4o8/sBFD52jd9VwQXa1icau89kQ7THqAjS4dvR/bbZOKbLCjRJJtZQNlkG6pCWZO1WALOZjO9Lzc=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install nvmetcli nvme-cli -y
- install: null
- cephadm: null
- cephadm.apply:
    specs:
    - placement:
        count: 3
      service_id: foo
      service_type: mon
      spec:
        crush_locations:
          host.a:
          - datacenter=a
          host.b:
          - datacenter=b
          - rack=2
          host.c:
          - datacenter=a
          - rack=3
- cephadm.shell:
    host.a:
    - |-
      set -ex
      # since we don't know the real hostnames before the test, the next
      # bit is in order to replace the fake hostnames "host.a/b/c" with
      # the actual names cephadm knows the host by within the mon spec
      ceph orch host ls --format json | jq -r '.[] | .hostname' > realnames
      echo $'host.a\nhost.b\nhost.c' > fakenames
      echo $'a\nb\nc' > mon_ids
      echo $'{datacenter=a}\n{datacenter=b,rack=2}\n{datacenter=a,rack=3}' > crush_locs
      ceph orch ls --service-name mon --export > mon.yaml
      MONSPEC=`cat mon.yaml`
      echo "$MONSPEC"
      while read realname <&3 && read fakename <&4; do
        MONSPEC="${MONSPEC//$fakename/$realname}"
      done 3<realnames 4<fakenames
      echo "$MONSPEC" > mon.yaml
      cat mon.yaml
      # now the spec should have the real hostnames, so let's re-apply
      ceph orch apply -i mon.yaml
      sleep 90
      ceph orch ps --refresh
      ceph orch ls --service-name mon --export > mon.yaml; ceph orch apply -i mon.yaml
      sleep 90
      ceph mon dump
      ceph mon dump --format json
      # verify all the crush locations got set from "ceph mon dump" output
      while read monid <&3 && read crushloc <&4; do
        ceph mon dump --format json | jq --arg monid "$monid" --arg crushloc "$crushloc" -e '.mons | .[] | select(.name == $monid) | .crush_location == $crushloc'
      done 3<mon_ids 4<crush_locs
, func=.kill_console_loggers at 0x7f43e9ba3e20>, signals=[15])
2026-03-10T06:36:42.834 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T06:36:42.835 INFO:teuthology.task.internal:Opening connections...
2026-03-10T06:36:42.835 DEBUG:teuthology.task.internal:connecting to ubuntu@vm01.local
2026-03-10T06:36:42.836 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T06:36:42.896 DEBUG:teuthology.task.internal:connecting to ubuntu@vm08.local
2026-03-10T06:36:42.896 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T06:36:42.955 DEBUG:teuthology.task.internal:connecting to ubuntu@vm09.local
2026-03-10T06:36:42.955 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T06:36:43.014 INFO:teuthology.run_tasks:Running task internal.push_inventory...
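The verification loop at the end of the cephadm.shell script above can be exercised on its own; the sketch below replays the same jq check against a canned `ceph mon dump --format json` payload instead of a live cluster. The JSON here is fabricated for illustration and only mimics the fields the check reads (`name`, `crush_location`).

```shell
# Hypothetical standalone rerun of the script's crush-location check.
# Skip quietly if jq is not installed on this machine.
command -v jq >/dev/null || exit 0

# Fabricated stand-in for `ceph mon dump --format json` output.
dump='{"mons":[{"name":"a","crush_location":"{datacenter=a}"},{"name":"b","crush_location":"{datacenter=b,rack=2}"},{"name":"c","crush_location":"{datacenter=a,rack=3}"}]}'

# Same fixture files the test script writes.
printf '%s\n' a b c > mon_ids
printf '%s\n' '{datacenter=a}' '{datacenter=b,rack=2}' '{datacenter=a,rack=3}' > crush_locs

ok=yes
while read -r monid <&3 && read -r crushloc <&4; do
  # jq -e exits non-zero when the comparison yields false or no mon matches
  echo "$dump" | jq --arg monid "$monid" --arg crushloc "$crushloc" \
      -e '.mons | .[] | select(.name == $monid) | .crush_location == $crushloc' \
      > /dev/null || ok=no
done 3<mon_ids 4<crush_locs
echo "crush locations verified: $ok"
```

The `done 3<mon_ids 4<crush_locs` form attaches the two fixture files to descriptors 3 and 4 for the whole loop, which is why the `read` calls inside use `<&3` and `<&4` rather than consuming stdin.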
2026-03-10T06:36:43.016 DEBUG:teuthology.orchestra.run.vm01:> uname -m
2026-03-10T06:36:43.030 INFO:teuthology.orchestra.run.vm01.stdout:x86_64
2026-03-10T06:36:43.030 DEBUG:teuthology.orchestra.run.vm01:> cat /etc/os-release
2026-03-10T06:36:43.083 INFO:teuthology.orchestra.run.vm01.stdout:NAME="CentOS Stream"
2026-03-10T06:36:43.083 INFO:teuthology.orchestra.run.vm01.stdout:VERSION="9"
2026-03-10T06:36:43.083 INFO:teuthology.orchestra.run.vm01.stdout:ID="centos"
2026-03-10T06:36:43.084 INFO:teuthology.orchestra.run.vm01.stdout:ID_LIKE="rhel fedora"
2026-03-10T06:36:43.084 INFO:teuthology.orchestra.run.vm01.stdout:VERSION_ID="9"
2026-03-10T06:36:43.084 INFO:teuthology.orchestra.run.vm01.stdout:PLATFORM_ID="platform:el9"
2026-03-10T06:36:43.084 INFO:teuthology.orchestra.run.vm01.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T06:36:43.084 INFO:teuthology.orchestra.run.vm01.stdout:ANSI_COLOR="0;31"
2026-03-10T06:36:43.084 INFO:teuthology.orchestra.run.vm01.stdout:LOGO="fedora-logo-icon"
2026-03-10T06:36:43.084 INFO:teuthology.orchestra.run.vm01.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T06:36:43.084 INFO:teuthology.orchestra.run.vm01.stdout:HOME_URL="https://centos.org/"
2026-03-10T06:36:43.084 INFO:teuthology.orchestra.run.vm01.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T06:36:43.084 INFO:teuthology.orchestra.run.vm01.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T06:36:43.084 INFO:teuthology.orchestra.run.vm01.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T06:36:43.084 INFO:teuthology.lock.ops:Updating vm01.local on lock server
2026-03-10T06:36:43.088 DEBUG:teuthology.orchestra.run.vm08:> uname -m
2026-03-10T06:36:43.103 INFO:teuthology.orchestra.run.vm08.stdout:x86_64
2026-03-10T06:36:43.103 DEBUG:teuthology.orchestra.run.vm08:> cat /etc/os-release
2026-03-10T06:36:43.157 INFO:teuthology.orchestra.run.vm08.stdout:NAME="CentOS Stream"
2026-03-10T06:36:43.157 INFO:teuthology.orchestra.run.vm08.stdout:VERSION="9"
2026-03-10T06:36:43.158 INFO:teuthology.orchestra.run.vm08.stdout:ID="centos"
2026-03-10T06:36:43.158 INFO:teuthology.orchestra.run.vm08.stdout:ID_LIKE="rhel fedora"
2026-03-10T06:36:43.158 INFO:teuthology.orchestra.run.vm08.stdout:VERSION_ID="9"
2026-03-10T06:36:43.158 INFO:teuthology.orchestra.run.vm08.stdout:PLATFORM_ID="platform:el9"
2026-03-10T06:36:43.158 INFO:teuthology.orchestra.run.vm08.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T06:36:43.158 INFO:teuthology.orchestra.run.vm08.stdout:ANSI_COLOR="0;31"
2026-03-10T06:36:43.158 INFO:teuthology.orchestra.run.vm08.stdout:LOGO="fedora-logo-icon"
2026-03-10T06:36:43.158 INFO:teuthology.orchestra.run.vm08.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T06:36:43.158 INFO:teuthology.orchestra.run.vm08.stdout:HOME_URL="https://centos.org/"
2026-03-10T06:36:43.158 INFO:teuthology.orchestra.run.vm08.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T06:36:43.158 INFO:teuthology.orchestra.run.vm08.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T06:36:43.158 INFO:teuthology.orchestra.run.vm08.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T06:36:43.158 INFO:teuthology.lock.ops:Updating vm08.local on lock server
2026-03-10T06:36:43.162 DEBUG:teuthology.orchestra.run.vm09:> uname -m
2026-03-10T06:36:43.176 INFO:teuthology.orchestra.run.vm09.stdout:x86_64
2026-03-10T06:36:43.176 DEBUG:teuthology.orchestra.run.vm09:> cat /etc/os-release
2026-03-10T06:36:43.230 INFO:teuthology.orchestra.run.vm09.stdout:NAME="CentOS Stream"
2026-03-10T06:36:43.230 INFO:teuthology.orchestra.run.vm09.stdout:VERSION="9"
2026-03-10T06:36:43.230 INFO:teuthology.orchestra.run.vm09.stdout:ID="centos"
2026-03-10T06:36:43.230 INFO:teuthology.orchestra.run.vm09.stdout:ID_LIKE="rhel fedora"
2026-03-10T06:36:43.230 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_ID="9"
2026-03-10T06:36:43.230 INFO:teuthology.orchestra.run.vm09.stdout:PLATFORM_ID="platform:el9"
2026-03-10T06:36:43.230 INFO:teuthology.orchestra.run.vm09.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T06:36:43.230 INFO:teuthology.orchestra.run.vm09.stdout:ANSI_COLOR="0;31"
2026-03-10T06:36:43.230 INFO:teuthology.orchestra.run.vm09.stdout:LOGO="fedora-logo-icon"
2026-03-10T06:36:43.230 INFO:teuthology.orchestra.run.vm09.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T06:36:43.230 INFO:teuthology.orchestra.run.vm09.stdout:HOME_URL="https://centos.org/"
2026-03-10T06:36:43.230 INFO:teuthology.orchestra.run.vm09.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T06:36:43.230 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T06:36:43.230 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T06:36:43.230 INFO:teuthology.lock.ops:Updating vm09.local on lock server
2026-03-10T06:36:43.234 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T06:36:43.236 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T06:36:43.237 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T06:36:43.237 DEBUG:teuthology.orchestra.run.vm01:> test '!' -e /home/ubuntu/cephtest
2026-03-10T06:36:43.239 DEBUG:teuthology.orchestra.run.vm08:> test '!' -e /home/ubuntu/cephtest
2026-03-10T06:36:43.240 DEBUG:teuthology.orchestra.run.vm09:> test '!' -e /home/ubuntu/cephtest
2026-03-10T06:36:43.285 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T06:36:43.286 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T06:36:43.286 DEBUG:teuthology.orchestra.run.vm01:> test -z $(ls -A /var/lib/ceph)
2026-03-10T06:36:43.293 DEBUG:teuthology.orchestra.run.vm08:> test -z $(ls -A /var/lib/ceph)
2026-03-10T06:36:43.295 DEBUG:teuthology.orchestra.run.vm09:> test -z $(ls -A /var/lib/ceph)
2026-03-10T06:36:43.306 INFO:teuthology.orchestra.run.vm01.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T06:36:43.308 INFO:teuthology.orchestra.run.vm08.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T06:36:43.338 INFO:teuthology.orchestra.run.vm09.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T06:36:43.339 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T06:36:43.347 DEBUG:teuthology.orchestra.run.vm01:> test -e /ceph-qa-ready
2026-03-10T06:36:43.361 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:36:43.575 DEBUG:teuthology.orchestra.run.vm08:> test -e /ceph-qa-ready
2026-03-10T06:36:43.589 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:36:43.774 DEBUG:teuthology.orchestra.run.vm09:> test -e /ceph-qa-ready
2026-03-10T06:36:43.788 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:36:43.966 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T06:36:43.967 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T06:36:43.967 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T06:36:43.969 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T06:36:43.971 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T06:36:43.985 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T06:36:43.986 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T06:36:43.987 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T06:36:43.987 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T06:36:44.027 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T06:36:44.029 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T06:36:44.045 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T06:36:44.046 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T06:36:44.046 DEBUG:teuthology.orchestra.run.vm01:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T06:36:44.100 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:36:44.100 DEBUG:teuthology.orchestra.run.vm08:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T06:36:44.113 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:36:44.113 DEBUG:teuthology.orchestra.run.vm09:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T06:36:44.127 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:36:44.127 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T06:36:44.143 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T06:36:44.155 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T06:36:44.167 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T06:36:44.176 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T06:36:44.177 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T06:36:44.184 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T06:36:44.191 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T06:36:44.199 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T06:36:44.200 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T06:36:44.202 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T06:36:44.202 DEBUG:teuthology.orchestra.run.vm01:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T06:36:44.220 DEBUG:teuthology.orchestra.run.vm08:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T06:36:44.228 DEBUG:teuthology.orchestra.run.vm09:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T06:36:44.264 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T06:36:44.266 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T06:36:44.266 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T06:36:44.286 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T06:36:44.290 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T06:36:44.318 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T06:36:44.365 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T06:36:44.422 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T06:36:44.422 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T06:36:44.482 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T06:36:44.503 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T06:36:44.557 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T06:36:44.557 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T06:36:44.615 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T06:36:44.635 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T06:36:44.690 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T06:36:44.690 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T06:36:44.748 DEBUG:teuthology.orchestra.run.vm01:> sudo service rsyslog restart
2026-03-10T06:36:44.749 DEBUG:teuthology.orchestra.run.vm08:> sudo service rsyslog restart
2026-03-10T06:36:44.751 DEBUG:teuthology.orchestra.run.vm09:> sudo service rsyslog restart
2026-03-10T06:36:44.778 INFO:teuthology.orchestra.run.vm08.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T06:36:44.779 INFO:teuthology.orchestra.run.vm01.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T06:36:44.813 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T06:36:45.139 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T06:36:45.140 INFO:teuthology.task.internal:Starting timer...
2026-03-10T06:36:45.140 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T06:36:45.143 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T06:36:45.145 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0']}
2026-03-10T06:36:45.145 INFO:teuthology.task.selinux:Excluding vm01: VMs are not yet supported
2026-03-10T06:36:45.145 INFO:teuthology.task.selinux:Excluding vm08: VMs are not yet supported
2026-03-10T06:36:45.145 INFO:teuthology.task.selinux:Excluding vm09: VMs are not yet supported
2026-03-10T06:36:45.145 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T06:36:45.145 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T06:36:45.146 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T06:36:45.146 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T06:36:45.147 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T06:36:45.147 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T06:36:45.149 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T06:36:45.748 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T06:36:45.754 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T06:36:45.754 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventory4l_xcmqo --limit vm01.local,vm08.local,vm09.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T06:38:31.827 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm01.local'), Remote(name='ubuntu@vm08.local'), Remote(name='ubuntu@vm09.local')]
2026-03-10T06:38:31.828 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm01.local'
2026-03-10T06:38:31.828 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T06:38:31.891 DEBUG:teuthology.orchestra.run.vm01:> true
2026-03-10T06:38:31.964 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm01.local'
2026-03-10T06:38:31.964 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm08.local'
2026-03-10T06:38:31.964 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T06:38:32.027 DEBUG:teuthology.orchestra.run.vm08:> true
2026-03-10T06:38:32.107 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm08.local'
2026-03-10T06:38:32.107 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm09.local'
2026-03-10T06:38:32.107 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T06:38:32.172 DEBUG:teuthology.orchestra.run.vm09:> true
2026-03-10T06:38:32.247 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm09.local'
2026-03-10T06:38:32.247 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T06:38:32.249 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T06:38:32.249 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T06:38:32.250 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T06:38:32.251 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T06:38:32.251 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T06:38:32.253 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T06:38:32.253 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T06:38:32.286 INFO:teuthology.orchestra.run.vm08.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T06:38:32.295 INFO:teuthology.orchestra.run.vm01.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T06:38:32.305 INFO:teuthology.orchestra.run.vm08.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T06:38:32.313 INFO:teuthology.orchestra.run.vm01.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T06:38:32.325 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T06:38:32.332 INFO:teuthology.orchestra.run.vm08.stderr:sudo: ntpd: command not found
2026-03-10T06:38:32.340 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T06:38:32.343 INFO:teuthology.orchestra.run.vm08.stdout:506 Cannot talk to daemon
2026-03-10T06:38:32.343 INFO:teuthology.orchestra.run.vm01.stderr:sudo: ntpd: command not found
2026-03-10T06:38:32.355 INFO:teuthology.orchestra.run.vm01.stdout:506 Cannot talk to daemon
2026-03-10T06:38:32.357 INFO:teuthology.orchestra.run.vm08.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T06:38:32.368 INFO:teuthology.orchestra.run.vm09.stderr:sudo: ntpd: command not found
2026-03-10T06:38:32.373 INFO:teuthology.orchestra.run.vm08.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T06:38:32.373 INFO:teuthology.orchestra.run.vm01.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T06:38:32.378 INFO:teuthology.orchestra.run.vm09.stdout:506 Cannot talk to daemon
2026-03-10T06:38:32.392 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T06:38:32.393 INFO:teuthology.orchestra.run.vm01.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T06:38:32.411 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T06:38:32.425 INFO:teuthology.orchestra.run.vm08.stderr:bash: line 1: ntpq: command not found
2026-03-10T06:38:32.428 INFO:teuthology.orchestra.run.vm08.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T06:38:32.428 INFO:teuthology.orchestra.run.vm08.stdout:===============================================================================
2026-03-10T06:38:32.446 INFO:teuthology.orchestra.run.vm01.stderr:bash: line 1: ntpq: command not found
2026-03-10T06:38:32.448 INFO:teuthology.orchestra.run.vm01.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T06:38:32.448 INFO:teuthology.orchestra.run.vm01.stdout:===============================================================================
2026-03-10T06:38:32.468 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found
2026-03-10T06:38:32.471 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T06:38:32.471 INFO:teuthology.orchestra.run.vm09.stdout:===============================================================================
2026-03-10T06:38:32.472 INFO:teuthology.run_tasks:Running task pexec...
2026-03-10T06:38:32.475 INFO:teuthology.task.pexec:Executing custom commands...
2026-03-10T06:38:32.475 DEBUG:teuthology.orchestra.run.vm01:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T06:38:32.475 DEBUG:teuthology.orchestra.run.vm08:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T06:38:32.475 DEBUG:teuthology.orchestra.run.vm09:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T06:38:32.477 DEBUG:teuthology.task.pexec:ubuntu@vm08.local< sudo dnf remove nvme-cli -y
2026-03-10T06:38:32.477 DEBUG:teuthology.task.pexec:ubuntu@vm08.local< sudo dnf install nvmetcli nvme-cli -y
2026-03-10T06:38:32.477 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm08.local
2026-03-10T06:38:32.477 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T06:38:32.477 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y
2026-03-10T06:38:32.477 DEBUG:teuthology.task.pexec:ubuntu@vm01.local< sudo dnf remove nvme-cli -y
2026-03-10T06:38:32.477 DEBUG:teuthology.task.pexec:ubuntu@vm01.local< sudo dnf install nvmetcli nvme-cli -y
2026-03-10T06:38:32.477 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm01.local
2026-03-10T06:38:32.477 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T06:38:32.477 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y
2026-03-10T06:38:32.515 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo dnf remove nvme-cli -y
2026-03-10T06:38:32.516 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo dnf install nvmetcli nvme-cli -y
2026-03-10T06:38:32.516 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm09.local
2026-03-10T06:38:32.516 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T06:38:32.516 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y
2026-03-10T06:38:32.676 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: nvme-cli
2026-03-10T06:38:32.676 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:38:32.679 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:38:32.680 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:38:32.680 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:38:32.698 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: nvme-cli
2026-03-10T06:38:32.698 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:38:32.701 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:38:32.702 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:38:32.702 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:38:32.758 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: nvme-cli
2026-03-10T06:38:32.758 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:38:32.761 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:38:32.761 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:38:32.761 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:38:33.078 INFO:teuthology.orchestra.run.vm08.stdout:Last metadata expiration check: 0:01:14 ago on Tue 10 Mar 2026 06:37:19 AM UTC.
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout: Package Architecture Version Repository Size
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout:Installing:
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout:Installing dependencies:
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout:Install 6 Packages
2026-03-10T06:38:33.178 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:38:33.179 INFO:teuthology.orchestra.run.vm08.stdout:Total download size: 2.3 M
2026-03-10T06:38:33.179 INFO:teuthology.orchestra.run.vm08.stdout:Installed size: 11 M
2026-03-10T06:38:33.179 INFO:teuthology.orchestra.run.vm08.stdout:Downloading Packages:
2026-03-10T06:38:33.197 INFO:teuthology.orchestra.run.vm01.stdout:Last metadata expiration check: 0:01:11 ago on Tue 10 Mar 2026 06:37:22 AM UTC.
2026-03-10T06:38:33.273 INFO:teuthology.orchestra.run.vm09.stdout:Last metadata expiration check: 0:01:01 ago on Tue 10 Mar 2026 06:37:32 AM UTC.
2026-03-10T06:38:33.319 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout: Package Architecture Version Repository Size
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout:Installing:
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout:Installing dependencies:
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout:Install 6 Packages
2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout:Total download size: 2.3 M 2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout:Installed size: 11 M 2026-03-10T06:38:33.320 INFO:teuthology.orchestra.run.vm01.stdout:Downloading Packages: 2026-03-10T06:38:33.399 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout: Package Architecture Version Repository Size 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout:Installing: 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout:Installing dependencies: 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T06:38:33.400 
INFO:teuthology.orchestra.run.vm09.stdout:Install 6 Packages 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout:Total download size: 2.3 M 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout:Installed size: 11 M 2026-03-10T06:38:33.400 INFO:teuthology.orchestra.run.vm09.stdout:Downloading Packages: 2026-03-10T06:38:33.491 INFO:teuthology.orchestra.run.vm08.stdout:(1/6): nvmetcli-0.8-3.el9.noarch.rpm 206 kB/s | 44 kB 00:00 2026-03-10T06:38:33.520 INFO:teuthology.orchestra.run.vm08.stdout:(2/6): python3-configshell-1.1.30-1.el9.noarch. 298 kB/s | 72 kB 00:00 2026-03-10T06:38:33.629 INFO:teuthology.orchestra.run.vm08.stdout:(3/6): nvme-cli-2.16-1.el9.x86_64.rpm 3.3 MB/s | 1.2 MB 00:00 2026-03-10T06:38:33.630 INFO:teuthology.orchestra.run.vm08.stdout:(4/6): python3-kmod-0.9-32.el9.x86_64.rpm 605 kB/s | 84 kB 00:00 2026-03-10T06:38:33.632 INFO:teuthology.orchestra.run.vm09.stdout:(1/6): nvmetcli-0.8-3.el9.noarch.rpm 386 kB/s | 44 kB 00:00 2026-03-10T06:38:33.634 INFO:teuthology.orchestra.run.vm08.stdout:(5/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 1.3 MB/s | 150 kB 00:00 2026-03-10T06:38:33.654 INFO:teuthology.orchestra.run.vm09.stdout:(2/6): python3-configshell-1.1.30-1.el9.noarch. 529 kB/s | 72 kB 00:00 2026-03-10T06:38:33.684 INFO:teuthology.orchestra.run.vm01.stdout:(1/6): nvmetcli-0.8-3.el9.noarch.rpm 183 kB/s | 44 kB 00:00 2026-03-10T06:38:33.704 INFO:teuthology.orchestra.run.vm08.stdout:(6/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 11 MB/s | 837 kB 00:00 2026-03-10T06:38:33.705 INFO:teuthology.orchestra.run.vm01.stdout:(2/6): python3-configshell-1.1.30-1.el9.noarch. 
275 kB/s | 72 kB 00:00 2026-03-10T06:38:33.706 INFO:teuthology.orchestra.run.vm08.stdout:-------------------------------------------------------------------------------- 2026-03-10T06:38:33.706 INFO:teuthology.orchestra.run.vm08.stdout:Total 4.4 MB/s | 2.3 MB 00:00 2026-03-10T06:38:33.734 INFO:teuthology.orchestra.run.vm09.stdout:(3/6): python3-kmod-0.9-32.el9.x86_64.rpm 827 kB/s | 84 kB 00:00 2026-03-10T06:38:33.755 INFO:teuthology.orchestra.run.vm09.stdout:(4/6): nvme-cli-2.16-1.el9.x86_64.rpm 4.9 MB/s | 1.2 MB 00:00 2026-03-10T06:38:33.756 INFO:teuthology.orchestra.run.vm09.stdout:(5/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 1.4 MB/s | 150 kB 00:00 2026-03-10T06:38:33.765 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check 2026-03-10T06:38:33.772 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded. 2026-03-10T06:38:33.772 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test 2026-03-10T06:38:33.801 INFO:teuthology.orchestra.run.vm09.stdout:(6/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 13 MB/s | 837 kB 00:00 2026-03-10T06:38:33.801 INFO:teuthology.orchestra.run.vm09.stdout:-------------------------------------------------------------------------------- 2026-03-10T06:38:33.801 INFO:teuthology.orchestra.run.vm09.stdout:Total 5.8 MB/s | 2.3 MB 00:00 2026-03-10T06:38:33.815 INFO:teuthology.orchestra.run.vm01.stdout:(3/6): python3-kmod-0.9-32.el9.x86_64.rpm 643 kB/s | 84 kB 00:00 2026-03-10T06:38:33.821 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded. 2026-03-10T06:38:33.821 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction 2026-03-10T06:38:33.850 INFO:teuthology.orchestra.run.vm01.stdout:(4/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 1.0 MB/s | 150 kB 00:00 2026-03-10T06:38:33.876 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-10T06:38:33.884 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 
2026-03-10T06:38:33.884 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-10T06:38:33.898 INFO:teuthology.orchestra.run.vm01.stdout:(5/6): nvme-cli-2.16-1.el9.x86_64.rpm 2.5 MB/s | 1.2 MB 00:00 2026-03-10T06:38:33.946 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-10T06:38:33.946 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-10T06:38:33.965 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1 2026-03-10T06:38:33.975 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6 2026-03-10T06:38:33.986 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6 2026-03-10T06:38:33.993 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6 2026-03-10T06:38:34.000 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6 2026-03-10T06:38:34.002 INFO:teuthology.orchestra.run.vm08.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6 2026-03-10T06:38:34.028 INFO:teuthology.orchestra.run.vm01.stdout:(6/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 3.8 MB/s | 837 kB 00:00 2026-03-10T06:38:34.028 INFO:teuthology.orchestra.run.vm01.stdout:-------------------------------------------------------------------------------- 2026-03-10T06:38:34.028 INFO:teuthology.orchestra.run.vm01.stdout:Total 3.3 MB/s | 2.3 MB 00:00 2026-03-10T06:38:34.103 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check 2026-03-10T06:38:34.114 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded. 
2026-03-10T06:38:34.114 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test 2026-03-10T06:38:34.138 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-10T06:38:34.152 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6 2026-03-10T06:38:34.156 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6 2026-03-10T06:38:34.161 INFO:teuthology.orchestra.run.vm08.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6 2026-03-10T06:38:34.166 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6 2026-03-10T06:38:34.175 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6 2026-03-10T06:38:34.184 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded. 2026-03-10T06:38:34.184 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction 2026-03-10T06:38:34.187 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6 2026-03-10T06:38:34.189 INFO:teuthology.orchestra.run.vm09.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6 2026-03-10T06:38:34.365 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1 2026-03-10T06:38:34.379 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6 2026-03-10T06:38:34.379 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6 2026-03-10T06:38:34.387 INFO:teuthology.orchestra.run.vm09.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6 2026-03-10T06:38:34.393 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6 2026-03-10T06:38:34.401 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6 2026-03-10T06:38:34.411 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6 2026-03-10T06:38:34.413 
INFO:teuthology.orchestra.run.vm01.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6 2026-03-10T06:38:34.537 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6 2026-03-10T06:38:34.537 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service. 2026-03-10T06:38:34.537 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:38:34.600 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6 2026-03-10T06:38:34.606 INFO:teuthology.orchestra.run.vm01.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6 2026-03-10T06:38:34.813 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6 2026-03-10T06:38:34.813 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service. 2026-03-10T06:38:34.813 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:38:35.030 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6 2026-03-10T06:38:35.031 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service. 
2026-03-10T06:38:35.031 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:38:35.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6 2026-03-10T06:38:35.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6 2026-03-10T06:38:35.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6 2026-03-10T06:38:35.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6 2026-03-10T06:38:35.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6 2026-03-10T06:38:35.218 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6 2026-03-10T06:38:35.218 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:38:35.218 INFO:teuthology.orchestra.run.vm08.stdout:Installed: 2026-03-10T06:38:35.218 INFO:teuthology.orchestra.run.vm08.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch 2026-03-10T06:38:35.218 INFO:teuthology.orchestra.run.vm08.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64 2026-03-10T06:38:35.218 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64 2026-03-10T06:38:35.218 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:38:35.218 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 
2026-03-10T06:38:35.284 DEBUG:teuthology.parallel:result is None 2026-03-10T06:38:35.436 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6 2026-03-10T06:38:35.436 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6 2026-03-10T06:38:35.436 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6 2026-03-10T06:38:35.436 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6 2026-03-10T06:38:35.436 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6 2026-03-10T06:38:35.542 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6 2026-03-10T06:38:35.543 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:38:35.543 INFO:teuthology.orchestra.run.vm09.stdout:Installed: 2026-03-10T06:38:35.543 INFO:teuthology.orchestra.run.vm09.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch 2026-03-10T06:38:35.543 INFO:teuthology.orchestra.run.vm09.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64 2026-03-10T06:38:35.543 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64 2026-03-10T06:38:35.543 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:38:35.543 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 
2026-03-10T06:38:35.608 DEBUG:teuthology.parallel:result is None 2026-03-10T06:38:35.681 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6 2026-03-10T06:38:35.681 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6 2026-03-10T06:38:35.681 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6 2026-03-10T06:38:35.681 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6 2026-03-10T06:38:35.681 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6 2026-03-10T06:38:35.768 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6 2026-03-10T06:38:35.769 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:38:35.769 INFO:teuthology.orchestra.run.vm01.stdout:Installed: 2026-03-10T06:38:35.769 INFO:teuthology.orchestra.run.vm01.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch 2026-03-10T06:38:35.769 INFO:teuthology.orchestra.run.vm01.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64 2026-03-10T06:38:35.769 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64 2026-03-10T06:38:35.769 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:38:35.769 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T06:38:35.838 DEBUG:teuthology.parallel:result is None 2026-03-10T06:38:35.838 INFO:teuthology.run_tasks:Running task install... 
2026-03-10T06:38:35.840 DEBUG:teuthology.task.install:project ceph
2026-03-10T06:38:35.840 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T06:38:35.841 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T06:38:35.841 INFO:teuthology.task.install:Using flavor: default
2026-03-10T06:38:35.843 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-10T06:38:35.843 INFO:teuthology.task.install:extra packages: []
2026-03-10T06:38:35.843 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-10T06:38:35.843 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T06:38:35.844 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-10T06:38:35.844 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T06:38:35.844 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-10T06:38:35.844 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T06:38:36.475 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-10T06:38:36.476 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-10T06:38:36.539 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-10T06:38:36.539 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-10T06:38:36.543 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-10T06:38:36.544 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-10T06:38:37.179 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-10T06:38:37.179 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T06:38:37.179 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-10T06:38:37.180 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-10T06:38:37.180 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T06:38:37.180 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-10T06:38:37.197 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-10T06:38:37.197 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T06:38:37.197 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-10T06:38:37.218 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-10T06:38:37.218 DEBUG:teuthology.orchestra.run.vm08:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-10T06:38:37.222 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-10T06:38:37.222 DEBUG:teuthology.orchestra.run.vm01:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-10T06:38:37.241 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-10T06:38:37.241 DEBUG:teuthology.orchestra.run.vm09:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-10T06:38:37.301 DEBUG:teuthology.orchestra.run.vm08:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-10T06:38:37.302 DEBUG:teuthology.orchestra.run.vm01:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-10T06:38:37.319 DEBUG:teuthology.orchestra.run.vm09:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-10T06:38:37.384 DEBUG:teuthology.orchestra.run.vm01:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-10T06:38:37.390 DEBUG:teuthology.orchestra.run.vm08:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-10T06:38:37.415 DEBUG:teuthology.orchestra.run.vm09:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-10T06:38:37.423 INFO:teuthology.orchestra.run.vm08.stdout:check_obsoletes = 1
2026-03-10T06:38:37.424 DEBUG:teuthology.orchestra.run.vm08:> sudo yum clean all
2026-03-10T06:38:37.454 INFO:teuthology.orchestra.run.vm01.stdout:check_obsoletes = 1
2026-03-10T06:38:37.455 DEBUG:teuthology.orchestra.run.vm01:> sudo yum clean all
2026-03-10T06:38:37.462 INFO:teuthology.orchestra.run.vm09.stdout:check_obsoletes = 1
2026-03-10T06:38:37.470 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean all
2026-03-10T06:38:37.612 INFO:teuthology.orchestra.run.vm08.stdout:41 files removed
2026-03-10T06:38:37.644 DEBUG:teuthology.orchestra.run.vm08:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-10T06:38:37.664 INFO:teuthology.orchestra.run.vm01.stdout:41 files removed
2026-03-10T06:38:37.665 INFO:teuthology.orchestra.run.vm09.stdout:41 files removed
2026-03-10T06:38:37.703 DEBUG:teuthology.orchestra.run.vm01:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-10T06:38:37.704 DEBUG:teuthology.orchestra.run.vm09:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-10T06:38:39.412 INFO:teuthology.orchestra.run.vm08.stdout:ceph packages for x86_64 53 kB/s | 84 kB 00:01
2026-03-10T06:38:39.427 INFO:teuthology.orchestra.run.vm01.stdout:ceph packages for x86_64 54 kB/s | 84 kB 00:01
2026-03-10T06:38:39.437 INFO:teuthology.orchestra.run.vm09.stdout:ceph packages for x86_64 55 kB/s | 84 kB 00:01
2026-03-10T06:38:40.390 INFO:teuthology.orchestra.run.vm09.stdout:ceph noarch packages 13 kB/s | 12 kB 00:00
2026-03-10T06:38:40.399 INFO:teuthology.orchestra.run.vm08.stdout:ceph noarch packages 12 kB/s | 12 kB 00:00
2026-03-10T06:38:40.410 INFO:teuthology.orchestra.run.vm01.stdout:ceph noarch packages 12 kB/s | 12 kB 00:00
2026-03-10T06:38:41.345 INFO:teuthology.orchestra.run.vm08.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00
2026-03-10T06:38:41.378 INFO:teuthology.orchestra.run.vm01.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00
2026-03-10T06:38:41.379 INFO:teuthology.orchestra.run.vm09.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00
2026-03-10T06:38:42.304 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - BaseOS 9.8 MB/s | 8.9 MB 00:00
2026-03-10T06:38:42.518 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - BaseOS 8.0 MB/s | 8.9 MB 00:01
2026-03-10T06:38:43.726 INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - BaseOS 3.8 MB/s | 8.9 MB 00:02
2026-03-10T06:38:43.986 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - AppStream 44 MB/s | 27 MB 00:00
2026-03-10T06:38:45.079 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - AppStream 13 MB/s | 27 MB 00:02
2026-03-10T06:38:46.485 INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - AppStream 13 MB/s | 27 MB 00:02
2026-03-10T06:38:48.947 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - CRB 7.1 MB/s | 8.0 MB 00:01
2026-03-10T06:38:49.030 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - CRB 5.4 MB/s | 8.0 MB 00:01
2026-03-10T06:38:50.221 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - Extras packages 48 kB/s | 20 kB 00:00
2026-03-10T06:38:50.389 INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - CRB 6.9 MB/s | 8.0 MB 00:01
2026-03-10T06:38:50.660 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - Extras packages 34 kB/s | 20 kB 00:00
2026-03-10T06:38:51.435 INFO:teuthology.orchestra.run.vm01.stdout:Extra Packages for Enterprise Linux 18 MB/s | 20 MB 00:01
2026-03-10T06:38:51.595 INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - Extras packages 58 kB/s | 20 kB 00:00
2026-03-10T06:38:52.091 INFO:teuthology.orchestra.run.vm09.stdout:Extra Packages for Enterprise Linux 15 MB/s | 20 MB 00:01
2026-03-10T06:38:52.867 INFO:teuthology.orchestra.run.vm08.stdout:Extra Packages for Enterprise Linux 17 MB/s | 20 MB 00:01
2026-03-10T06:38:56.585 INFO:teuthology.orchestra.run.vm01.stdout:lab-extras 47 kB/s | 50 kB 00:01
2026-03-10T06:38:57.101 INFO:teuthology.orchestra.run.vm09.stdout:lab-extras 63 kB/s | 50 kB 00:00
2026-03-10T06:38:57.786 INFO:teuthology.orchestra.run.vm08.stdout:lab-extras 62 kB/s | 50 kB 00:00
2026-03-10T06:38:58.034 INFO:teuthology.orchestra.run.vm01.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T06:38:58.034 INFO:teuthology.orchestra.run.vm01.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T06:38:58.038 INFO:teuthology.orchestra.run.vm01.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed.
2026-03-10T06:38:58.039 INFO:teuthology.orchestra.run.vm01.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-10T06:38:58.068 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:38:58.072 INFO:teuthology.orchestra.run.vm01.stdout:======================================================================================
2026-03-10T06:38:58.072 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T06:38:58.072 INFO:teuthology.orchestra.run.vm01.stdout:======================================================================================
2026-03-10T06:38:58.072 INFO:teuthology.orchestra.run.vm01.stdout:Installing:
2026-03-10T06:38:58.072 INFO:teuthology.orchestra.run.vm01.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k
2026-03-10T06:38:58.072 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M
2026-03-10T06:38:58.072 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M
2026-03-10T06:38:58.072 INFO:teuthology.orchestra.run.vm01.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k
2026-03-10T06:38:58.072 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M
2026-03-10T06:38:58.072 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout:Upgrading:
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout:Installing dependencies:
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k
2026-03-10T06:38:58.073 INFO:teuthology.orchestra.run.vm01.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k
2026-03-10T06:38:58.074 INFO:teuthology.orchestra.run.vm01.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: unzip x86_64 6.0-59.el9 baseos 182 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: zip x86_64 3.0-35.el9 baseos 266 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout:Installing weak dependencies:
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T06:38:58.075 INFO:teuthology.orchestra.run.vm01.stdout:======================================================================================
2026-03-10T06:38:58.076 INFO:teuthology.orchestra.run.vm01.stdout:Install 134 Packages
2026-03-10T06:38:58.076 INFO:teuthology.orchestra.run.vm01.stdout:Upgrade 2 Packages
2026-03-10T06:38:58.076 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:38:58.076 INFO:teuthology.orchestra.run.vm01.stdout:Total download size: 210 M
2026-03-10T06:38:58.076 INFO:teuthology.orchestra.run.vm01.stdout:Downloading Packages:
2026-03-10T06:38:58.533 INFO:teuthology.orchestra.run.vm09.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T06:38:58.533 INFO:teuthology.orchestra.run.vm09.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T06:38:58.538 INFO:teuthology.orchestra.run.vm09.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed.
2026-03-10T06:38:58.538 INFO:teuthology.orchestra.run.vm09.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-10T06:38:58.569 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:38:58.574 INFO:teuthology.orchestra.run.vm09.stdout:======================================================================================
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout:======================================================================================
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout:Installing:
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout:Upgrading:
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout:Installing dependencies:
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-10T06:38:58.575 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-10T06:38:58.576 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-10T06:38:58.577 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k
2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile
noarch 2.0-10.el9 epel 20 k 2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout:Installing weak dependencies: 2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout:====================================================================================== 2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout:Install 134 Packages 2026-03-10T06:38:58.578 INFO:teuthology.orchestra.run.vm09.stdout:Upgrade 2 Packages 2026-03-10T06:38:58.579 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:38:58.579 INFO:teuthology.orchestra.run.vm09.stdout:Total download size: 210 M 2026-03-10T06:38:58.579 INFO:teuthology.orchestra.run.vm09.stdout:Downloading Packages: 2026-03-10T06:38:59.185 INFO:teuthology.orchestra.run.vm08.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 
2026-03-10T06:38:59.185 INFO:teuthology.orchestra.run.vm08.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T06:38:59.190 INFO:teuthology.orchestra.run.vm08.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 2026-03-10T06:38:59.190 INFO:teuthology.orchestra.run.vm08.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-10T06:38:59.218 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout:====================================================================================== 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout:====================================================================================== 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout:Installing: 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 
7.4 M 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k 2026-03-10T06:38:59.223 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M 
2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout:Upgrading: 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout:Installing dependencies: 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas x86_64 
3.0.4-9.el9 appstream 30 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-10T06:38:59.224 
INFO:teuthology.orchestra.run.vm08.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-10T06:38:59.224 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 
2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections noarch 
3.0.0-8.el9 epel 23 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-portend noarch 
3.1.0-2.el9 epel 16 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client noarch 1.2.3-2.el9 
epel 90 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-10T06:38:59.225 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-10T06:38:59.226 INFO:teuthology.orchestra.run.vm08.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-10T06:38:59.226 INFO:teuthology.orchestra.run.vm08.stdout:Installing weak dependencies: 2026-03-10T06:38:59.226 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-10T06:38:59.226 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:38:59.226 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary 2026-03-10T06:38:59.226 INFO:teuthology.orchestra.run.vm08.stdout:====================================================================================== 2026-03-10T06:38:59.226 INFO:teuthology.orchestra.run.vm08.stdout:Install 134 Packages 2026-03-10T06:38:59.226 INFO:teuthology.orchestra.run.vm08.stdout:Upgrade 2 Packages 2026-03-10T06:38:59.226 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:38:59.226 INFO:teuthology.orchestra.run.vm08.stdout:Total download size: 210 M 2026-03-10T06:38:59.226 
INFO:teuthology.orchestra.run.vm08.stdout:Downloading Packages: 2026-03-10T06:39:00.000 INFO:teuthology.orchestra.run.vm09.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 10 kB/s | 6.5 kB 00:00 2026-03-10T06:39:00.016 INFO:teuthology.orchestra.run.vm01.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 10 kB/s | 6.5 kB 00:00 2026-03-10T06:39:01.134 INFO:teuthology.orchestra.run.vm08.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 11 kB/s | 6.5 kB 00:00 2026-03-10T06:39:01.721 INFO:teuthology.orchestra.run.vm09.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 685 kB/s | 1.2 MB 00:01 2026-03-10T06:39:02.044 INFO:teuthology.orchestra.run.vm09.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 451 kB/s | 145 kB 00:00 2026-03-10T06:39:04.685 INFO:teuthology.orchestra.run.vm01.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 252 kB/s | 1.2 MB 00:04 2026-03-10T06:39:05.096 INFO:teuthology.orchestra.run.vm08.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 297 kB/s | 1.2 MB 00:03 2026-03-10T06:39:05.293 INFO:teuthology.orchestra.run.vm01.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 239 kB/s | 145 kB 00:00 2026-03-10T06:39:05.551 INFO:teuthology.orchestra.run.vm08.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 319 kB/s | 145 kB 00:00 2026-03-10T06:39:05.578 INFO:teuthology.orchestra.run.vm09.stdout:(4/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 702 kB/s | 2.4 MB 00:03 2026-03-10T06:39:06.976 INFO:teuthology.orchestra.run.vm08.stdout:(4/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 877 kB/s | 5.5 MB 00:06 2026-03-10T06:39:07.258 INFO:teuthology.orchestra.run.vm09.stdout:(5/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 655 kB/s | 1.1 MB 00:01 2026-03-10T06:39:08.071 INFO:teuthology.orchestra.run.vm08.stdout:(5/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 1.0 MB/s | 1.1 MB 00:01 2026-03-10T06:39:10.512 INFO:teuthology.orchestra.run.vm01.stdout:(4/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 475 kB/s | 2.4 MB 00:05 
2026-03-10T06:39:10.516 INFO:teuthology.orchestra.run.vm08.stdout:(6/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 500 kB/s | 2.4 MB 00:04 2026-03-10T06:39:11.043 INFO:teuthology.orchestra.run.vm09.stdout:(6/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 482 kB/s | 5.5 MB 00:11 2026-03-10T06:39:11.137 INFO:teuthology.orchestra.run.vm01.stdout:(5/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 479 kB/s | 5.5 MB 00:11 2026-03-10T06:39:11.624 INFO:teuthology.orchestra.run.vm01.stdout:(6/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 990 kB/s | 1.1 MB 00:01 2026-03-10T06:39:11.958 INFO:teuthology.orchestra.run.vm09.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 1.0 MB/s | 4.7 MB 00:04 2026-03-10T06:39:12.074 INFO:teuthology.orchestra.run.vm08.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 1.2 MB/s | 4.7 MB 00:04 2026-03-10T06:39:13.695 INFO:teuthology.orchestra.run.vm01.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 1.9 MB/s | 4.7 MB 00:02 2026-03-10T06:39:15.520 INFO:teuthology.orchestra.run.vm09.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 1.3 MB/s | 22 MB 00:16 2026-03-10T06:39:15.698 INFO:teuthology.orchestra.run.vm09.stdout:(9/136): ceph-selinux-19.2.3-678.ge911bdeb.el9. 141 kB/s | 25 kB 00:00 2026-03-10T06:39:15.948 INFO:teuthology.orchestra.run.vm01.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 1.3 MB/s | 22 MB 00:16 2026-03-10T06:39:16.020 INFO:teuthology.orchestra.run.vm09.stdout:(10/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9 2.6 MB/s | 11 MB 00:04 2026-03-10T06:39:16.260 INFO:teuthology.orchestra.run.vm09.stdout:(11/136): libcephfs-devel-19.2.3-678.ge911bdeb. 140 kB/s | 34 kB 00:00 2026-03-10T06:39:16.260 INFO:teuthology.orchestra.run.vm01.stdout:(9/136): ceph-selinux-19.2.3-678.ge911bdeb.el9. 
80 kB/s | 25 kB 00:00 2026-03-10T06:39:16.452 INFO:teuthology.orchestra.run.vm08.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 1.4 MB/s | 22 MB 00:15 2026-03-10T06:39:16.606 INFO:teuthology.orchestra.run.vm08.stdout:(9/136): ceph-selinux-19.2.3-678.ge911bdeb.el9. 163 kB/s | 25 kB 00:00 2026-03-10T06:39:16.655 INFO:teuthology.orchestra.run.vm01.stdout:(10/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9 3.6 MB/s | 11 MB 00:02 2026-03-10T06:39:16.692 INFO:teuthology.orchestra.run.vm08.stdout:(10/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9 2.3 MB/s | 11 MB 00:04 2026-03-10T06:39:16.694 INFO:teuthology.orchestra.run.vm09.stdout:(12/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 3.0 MB/s | 17 MB 00:05 2026-03-10T06:39:16.697 INFO:teuthology.orchestra.run.vm09.stdout:(13/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 2.2 MB/s | 1.0 MB 00:00 2026-03-10T06:39:16.814 INFO:teuthology.orchestra.run.vm01.stdout:(11/136): libcephfs-devel-19.2.3-678.ge911bdeb. 212 kB/s | 34 kB 00:00 2026-03-10T06:39:16.853 INFO:teuthology.orchestra.run.vm08.stdout:(11/136): libcephfs-devel-19.2.3-678.ge911bdeb. 
209 kB/s | 34 kB 00:00 2026-03-10T06:39:16.865 INFO:teuthology.orchestra.run.vm09.stdout:(14/136): libcephsqlite-19.2.3-678.ge911bdeb.el 958 kB/s | 163 kB 00:00 2026-03-10T06:39:16.869 INFO:teuthology.orchestra.run.vm09.stdout:(15/136): librados-devel-19.2.3-678.ge911bdeb.e 736 kB/s | 127 kB 00:00 2026-03-10T06:39:16.926 INFO:teuthology.orchestra.run.vm08.stdout:(12/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 2.7 MB/s | 17 MB 00:06 2026-03-10T06:39:16.983 INFO:teuthology.orchestra.run.vm01.stdout:(12/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 3.2 MB/s | 17 MB 00:05 2026-03-10T06:39:17.031 INFO:teuthology.orchestra.run.vm09.stdout:(16/136): libradosstriper1-19.2.3-678.ge911bdeb 3.0 MB/s | 503 kB 00:00 2026-03-10T06:39:17.106 INFO:teuthology.orchestra.run.vm01.stdout:(13/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 3.3 MB/s | 1.0 MB 00:00 2026-03-10T06:39:17.128 INFO:teuthology.orchestra.run.vm08.stdout:(13/136): libcephsqlite-19.2.3-678.ge911bdeb.el 808 kB/s | 163 kB 00:00 2026-03-10T06:39:17.153 INFO:teuthology.orchestra.run.vm08.stdout:(14/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 3.3 MB/s | 1.0 MB 00:00 2026-03-10T06:39:17.340 INFO:teuthology.orchestra.run.vm09.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 145 kB/s | 45 kB 00:00 2026-03-10T06:39:17.341 INFO:teuthology.orchestra.run.vm08.stdout:(15/136): libradosstriper1-19.2.3-678.ge911bdeb 2.6 MB/s | 503 kB 00:00 2026-03-10T06:39:17.341 INFO:teuthology.orchestra.run.vm01.stdout:(14/136): libcephsqlite-19.2.3-678.ge911bdeb.el 458 kB/s | 163 kB 00:00 2026-03-10T06:39:17.353 INFO:teuthology.orchestra.run.vm08.stdout:(16/136): librados-devel-19.2.3-678.ge911bdeb.e 563 kB/s | 127 kB 00:00 2026-03-10T06:39:17.353 INFO:teuthology.orchestra.run.vm01.stdout:(15/136): librados-devel-19.2.3-678.ge911bdeb.e 512 kB/s | 127 kB 00:00 2026-03-10T06:39:17.547 INFO:teuthology.orchestra.run.vm01.stdout:(16/136): libradosstriper1-19.2.3-678.ge911bdeb 2.4 MB/s | 503 kB 00:00 2026-03-10T06:39:17.584 
INFO:teuthology.orchestra.run.vm08.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 196 kB/s | 45 kB 00:00 2026-03-10T06:39:17.584 INFO:teuthology.orchestra.run.vm09.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 586 kB/s | 142 kB 00:00 2026-03-10T06:39:17.690 INFO:teuthology.orchestra.run.vm01.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 317 kB/s | 45 kB 00:00 2026-03-10T06:39:17.725 INFO:teuthology.orchestra.run.vm08.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 1.0 MB/s | 142 kB 00:00 2026-03-10T06:39:17.738 INFO:teuthology.orchestra.run.vm09.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.0 MB/s | 165 kB 00:00 2026-03-10T06:39:17.838 INFO:teuthology.orchestra.run.vm01.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 961 kB/s | 142 kB 00:00 2026-03-10T06:39:17.867 INFO:teuthology.orchestra.run.vm08.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.1 MB/s | 165 kB 00:00 2026-03-10T06:39:17.894 INFO:teuthology.orchestra.run.vm09.stdout:(20/136): python3-rados-19.2.3-678.ge911bdeb.el 2.0 MB/s | 323 kB 00:00 2026-03-10T06:39:17.991 INFO:teuthology.orchestra.run.vm01.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.1 MB/s | 165 kB 00:00 2026-03-10T06:39:18.013 INFO:teuthology.orchestra.run.vm08.stdout:(20/136): python3-rados-19.2.3-678.ge911bdeb.el 2.2 MB/s | 323 kB 00:00 2026-03-10T06:39:18.072 INFO:teuthology.orchestra.run.vm09.stdout:(21/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 1.7 MB/s | 303 kB 00:00 2026-03-10T06:39:18.121 INFO:teuthology.orchestra.run.vm09.stdout:(22/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 4.3 MB/s | 5.4 MB 00:01 2026-03-10T06:39:18.139 INFO:teuthology.orchestra.run.vm01.stdout:(20/136): python3-rados-19.2.3-678.ge911bdeb.el 2.1 MB/s | 323 kB 00:00 2026-03-10T06:39:18.158 INFO:teuthology.orchestra.run.vm08.stdout:(21/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 
2.0 MB/s | 303 kB 00:00 2026-03-10T06:39:18.224 INFO:teuthology.orchestra.run.vm09.stdout:(23/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 655 kB/s | 100 kB 00:00 2026-03-10T06:39:18.273 INFO:teuthology.orchestra.run.vm09.stdout:(24/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 560 kB/s | 85 kB 00:00 2026-03-10T06:39:18.287 INFO:teuthology.orchestra.run.vm01.stdout:(21/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.0 MB/s | 303 kB 00:00 2026-03-10T06:39:18.303 INFO:teuthology.orchestra.run.vm08.stdout:(22/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 688 kB/s | 100 kB 00:00 2026-03-10T06:39:18.343 INFO:teuthology.orchestra.run.vm01.stdout:(22/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 5.5 MB/s | 5.4 MB 00:00 2026-03-10T06:39:18.436 INFO:teuthology.orchestra.run.vm01.stdout:(23/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 670 kB/s | 100 kB 00:00 2026-03-10T06:39:18.437 INFO:teuthology.orchestra.run.vm09.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.0 MB/s | 171 kB 00:00 2026-03-10T06:39:18.439 INFO:teuthology.orchestra.run.vm08.stdout:(23/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 628 kB/s | 85 kB 00:00 2026-03-10T06:39:18.494 INFO:teuthology.orchestra.run.vm01.stdout:(24/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 564 kB/s | 85 kB 00:00 2026-03-10T06:39:18.593 INFO:teuthology.orchestra.run.vm08.stdout:(24/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 4.3 MB/s | 5.4 MB 00:01 2026-03-10T06:39:18.610 INFO:teuthology.orchestra.run.vm09.stdout:(26/136): ceph-grafana-dashboards-19.2.3-678.ge 180 kB/s | 31 kB 00:00 2026-03-10T06:39:18.637 INFO:teuthology.orchestra.run.vm01.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.2 MB/s | 171 kB 00:00 2026-03-10T06:39:18.751 INFO:teuthology.orchestra.run.vm08.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.1 MB/s | 171 kB 00:00 2026-03-10T06:39:18.766 INFO:teuthology.orchestra.run.vm09.stdout:(27/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 967 kB/s | 150 kB 00:00 2026-03-10T06:39:18.787 
INFO:teuthology.orchestra.run.vm01.stdout:(26/136): ceph-grafana-dashboards-19.2.3-678.ge 208 kB/s | 31 kB 00:00 2026-03-10T06:39:18.839 INFO:teuthology.orchestra.run.vm09.stdout:(28/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 5.1 MB/s | 3.1 MB 00:00 2026-03-10T06:39:18.904 INFO:teuthology.orchestra.run.vm08.stdout:(26/136): ceph-grafana-dashboards-19.2.3-678.ge 204 kB/s | 31 kB 00:00 2026-03-10T06:39:19.012 INFO:teuthology.orchestra.run.vm08.stdout:(27/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 5.4 MB/s | 3.1 MB 00:00 2026-03-10T06:39:19.012 INFO:teuthology.orchestra.run.vm01.stdout:(27/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 5.4 MB/s | 3.1 MB 00:00 2026-03-10T06:39:19.068 INFO:teuthology.orchestra.run.vm08.stdout:(28/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 916 kB/s | 150 kB 00:00 2026-03-10T06:39:19.070 INFO:teuthology.orchestra.run.vm01.stdout:(28/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 531 kB/s | 150 kB 00:00 2026-03-10T06:39:19.961 INFO:teuthology.orchestra.run.vm01.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 4.0 MB/s | 3.8 MB 00:00 2026-03-10T06:39:19.985 INFO:teuthology.orchestra.run.vm09.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 3.1 MB/s | 3.8 MB 00:01 2026-03-10T06:39:19.993 INFO:teuthology.orchestra.run.vm08.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 3.9 MB/s | 3.8 MB 00:00 2026-03-10T06:39:20.109 INFO:teuthology.orchestra.run.vm01.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 1.7 MB/s | 253 kB 00:00 2026-03-10T06:39:20.125 INFO:teuthology.orchestra.run.vm09.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 1.8 MB/s | 253 kB 00:00 2026-03-10T06:39:20.125 INFO:teuthology.orchestra.run.vm08.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 1.9 MB/s | 253 kB 00:00 2026-03-10T06:39:20.210 INFO:teuthology.orchestra.run.vm01.stdout:(31/136): ceph-mgr-diskprediction-local-19.2.3- 6.5 MB/s | 7.4 MB 00:01 2026-03-10T06:39:20.211 INFO:teuthology.orchestra.run.vm09.stdout:(31/136): 
ceph-mgr-diskprediction-local-19.2.3- 5.4 MB/s | 7.4 MB 00:01 2026-03-10T06:39:20.266 INFO:teuthology.orchestra.run.vm01.stdout:(32/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 314 kB/s | 49 kB 00:00 2026-03-10T06:39:20.268 INFO:teuthology.orchestra.run.vm09.stdout:(32/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 344 kB/s | 49 kB 00:00 2026-03-10T06:39:20.268 INFO:teuthology.orchestra.run.vm08.stdout:(31/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 344 kB/s | 49 kB 00:00 2026-03-10T06:39:20.371 INFO:teuthology.orchestra.run.vm01.stdout:(33/136): ceph-prometheus-alerts-19.2.3-678.ge9 104 kB/s | 17 kB 00:00 2026-03-10T06:39:20.371 INFO:teuthology.orchestra.run.vm09.stdout:(33/136): ceph-prometheus-alerts-19.2.3-678.ge9 104 kB/s | 17 kB 00:00 2026-03-10T06:39:20.397 INFO:teuthology.orchestra.run.vm08.stdout:(32/136): ceph-prometheus-alerts-19.2.3-678.ge9 131 kB/s | 17 kB 00:00 2026-03-10T06:39:20.406 INFO:teuthology.orchestra.run.vm01.stdout:(34/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.1 MB/s | 299 kB 00:00 2026-03-10T06:39:20.411 INFO:teuthology.orchestra.run.vm09.stdout:(34/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.0 MB/s | 299 kB 00:00 2026-03-10T06:39:20.607 INFO:teuthology.orchestra.run.vm01.stdout:(35/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 3.2 MB/s | 769 kB 00:00 2026-03-10T06:39:20.617 INFO:teuthology.orchestra.run.vm09.stdout:(35/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.7 MB/s | 351 kB 00:00 2026-03-10T06:39:20.624 INFO:teuthology.orchestra.run.vm09.stdout:(36/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 3.0 MB/s | 769 kB 00:00 2026-03-10T06:39:20.635 INFO:teuthology.orchestra.run.vm08.stdout:(33/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 
1.2 MB/s | 299 kB 00:00 2026-03-10T06:39:20.645 INFO:teuthology.orchestra.run.vm09.stdout:(37/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 1.4 MB/s | 40 kB 00:00 2026-03-10T06:39:20.656 INFO:teuthology.orchestra.run.vm01.stdout:(36/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.4 MB/s | 351 kB 00:00 2026-03-10T06:39:20.671 INFO:teuthology.orchestra.run.vm08.stdout:(34/136): ceph-mgr-diskprediction-local-19.2.3- 4.6 MB/s | 7.4 MB 00:01 2026-03-10T06:39:20.706 INFO:teuthology.orchestra.run.vm01.stdout:(37/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 410 kB/s | 40 kB 00:00 2026-03-10T06:39:20.738 INFO:teuthology.orchestra.run.vm01.stdout:(38/136): libconfig-1.7.2-9.el9.x86_64.rpm 876 kB/s | 72 kB 00:00 2026-03-10T06:39:20.750 INFO:teuthology.orchestra.run.vm09.stdout:(38/136): libconfig-1.7.2-9.el9.x86_64.rpm 571 kB/s | 72 kB 00:00 2026-03-10T06:39:20.781 INFO:teuthology.orchestra.run.vm01.stdout:(39/136): libquadmath-11.5.0-14.el9.x86_64.rpm 4.2 MB/s | 184 kB 00:00 2026-03-10T06:39:20.909 INFO:teuthology.orchestra.run.vm01.stdout:(40/136): mailcap-2.1.49-5.el9.noarch.rpm 259 kB/s | 33 kB 00:00 2026-03-10T06:39:20.919 INFO:teuthology.orchestra.run.vm09.stdout:(39/136): libgfortran-11.5.0-14.el9.x86_64.rpm 2.8 MB/s | 794 kB 00:00 2026-03-10T06:39:20.932 INFO:teuthology.orchestra.run.vm08.stdout:(35/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.3 MB/s | 351 kB 00:00 2026-03-10T06:39:20.933 INFO:teuthology.orchestra.run.vm09.stdout:(40/136): libquadmath-11.5.0-14.el9.x86_64.rpm 1.0 MB/s | 184 kB 00:00 2026-03-10T06:39:20.934 INFO:teuthology.orchestra.run.vm01.stdout:(41/136): libgfortran-11.5.0-14.el9.x86_64.rpm 3.4 MB/s | 794 kB 00:00 2026-03-10T06:39:20.936 INFO:teuthology.orchestra.run.vm08.stdout:(36/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 2.5 MB/s | 769 kB 00:00 2026-03-10T06:39:20.946 INFO:teuthology.orchestra.run.vm09.stdout:(41/136): mailcap-2.1.49-5.el9.noarch.rpm 1.2 MB/s | 33 kB 00:00 2026-03-10T06:39:20.959 INFO:teuthology.orchestra.run.vm08.stdout:(37/136): 
ledmon-libs-1.1.0-3.el9.x86_64.rpm 1.5 MB/s | 40 kB 00:00 2026-03-10T06:39:20.966 INFO:teuthology.orchestra.run.vm01.stdout:(42/136): pciutils-3.7.0-7.el9.x86_64.rpm 1.6 MB/s | 93 kB 00:00 2026-03-10T06:39:20.972 INFO:teuthology.orchestra.run.vm09.stdout:(42/136): pciutils-3.7.0-7.el9.x86_64.rpm 2.4 MB/s | 93 kB 00:00 2026-03-10T06:39:20.988 INFO:teuthology.orchestra.run.vm09.stdout:(43/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 5.9 MB/s | 253 kB 00:00 2026-03-10T06:39:20.994 INFO:teuthology.orchestra.run.vm01.stdout:(43/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 4.1 MB/s | 253 kB 00:00 2026-03-10T06:39:21.016 INFO:teuthology.orchestra.run.vm09.stdout:(44/136): python3-ply-3.11-14.el9.noarch.rpm 3.8 MB/s | 106 kB 00:00 2026-03-10T06:39:21.033 INFO:teuthology.orchestra.run.vm08.stdout:(38/136): libgfortran-11.5.0-14.el9.x86_64.rpm 11 MB/s | 794 kB 00:00 2026-03-10T06:39:21.053 INFO:teuthology.orchestra.run.vm08.stdout:(39/136): libconfig-1.7.2-9.el9.x86_64.rpm 622 kB/s | 72 kB 00:00 2026-03-10T06:39:21.063 INFO:teuthology.orchestra.run.vm08.stdout:(40/136): libquadmath-11.5.0-14.el9.x86_64.rpm 6.2 MB/s | 184 kB 00:00 2026-03-10T06:39:21.087 INFO:teuthology.orchestra.run.vm08.stdout:(41/136): mailcap-2.1.49-5.el9.noarch.rpm 973 kB/s | 33 kB 00:00 2026-03-10T06:39:21.092 INFO:teuthology.orchestra.run.vm09.stdout:(45/136): python3-pycparser-2.20-6.el9.noarch.r 1.7 MB/s | 135 kB 00:00 2026-03-10T06:39:21.095 INFO:teuthology.orchestra.run.vm08.stdout:(42/136): pciutils-3.7.0-7.el9.x86_64.rpm 2.8 MB/s | 93 kB 00:00 2026-03-10T06:39:21.193 INFO:teuthology.orchestra.run.vm09.stdout:(46/136): python3-requests-2.25.1-10.el9.noarch 1.2 MB/s | 126 kB 00:00 2026-03-10T06:39:21.271 INFO:teuthology.orchestra.run.vm01.stdout:(44/136): python3-cryptography-36.0.1-5.el9.x86 4.1 MB/s | 1.2 MB 00:00 2026-03-10T06:39:21.272 INFO:teuthology.orchestra.run.vm08.stdout:(43/136): python3-cryptography-36.0.1-5.el9.x86 7.1 MB/s | 1.2 MB 00:00 2026-03-10T06:39:21.283 
INFO:teuthology.orchestra.run.vm01.stdout:(45/136): python3-ply-3.11-14.el9.noarch.rpm 368 kB/s | 106 kB 00:00 2026-03-10T06:39:21.303 INFO:teuthology.orchestra.run.vm08.stdout:(44/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 1.1 MB/s | 253 kB 00:00 2026-03-10T06:39:21.303 INFO:teuthology.orchestra.run.vm09.stdout:(47/136): python3-cryptography-36.0.1-5.el9.x86 3.8 MB/s | 1.2 MB 00:00 2026-03-10T06:39:21.330 INFO:teuthology.orchestra.run.vm09.stdout:(48/136): python3-urllib3-1.26.5-7.el9.noarch.r 1.6 MB/s | 218 kB 00:00 2026-03-10T06:39:21.330 INFO:teuthology.orchestra.run.vm08.stdout:(45/136): python3-ply-3.11-14.el9.noarch.rpm 1.8 MB/s | 106 kB 00:00 2026-03-10T06:39:21.340 INFO:teuthology.orchestra.run.vm01.stdout:(46/136): python3-pycparser-2.20-6.el9.noarch.r 1.9 MB/s | 135 kB 00:00 2026-03-10T06:39:21.346 INFO:teuthology.orchestra.run.vm08.stdout:(46/136): python3-pycparser-2.20-6.el9.noarch.r 3.1 MB/s | 135 kB 00:00 2026-03-10T06:39:21.354 INFO:teuthology.orchestra.run.vm09.stdout:(49/136): unzip-6.0-59.el9.x86_64.rpm 3.5 MB/s | 182 kB 00:00 2026-03-10T06:39:21.355 INFO:teuthology.orchestra.run.vm01.stdout:(47/136): python3-requests-2.25.1-10.el9.noarch 1.7 MB/s | 126 kB 00:00 2026-03-10T06:39:21.366 INFO:teuthology.orchestra.run.vm08.stdout:(47/136): python3-requests-2.25.1-10.el9.noarch 3.4 MB/s | 126 kB 00:00 2026-03-10T06:39:21.367 INFO:teuthology.orchestra.run.vm09.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 7.1 MB/s | 266 kB 00:00 2026-03-10T06:39:21.409 INFO:teuthology.orchestra.run.vm08.stdout:(48/136): python3-urllib3-1.26.5-7.el9.noarch.r 3.4 MB/s | 218 kB 00:00 2026-03-10T06:39:21.411 INFO:teuthology.orchestra.run.vm01.stdout:(48/136): python3-urllib3-1.26.5-7.el9.noarch.r 3.0 MB/s | 218 kB 00:00 2026-03-10T06:39:21.447 INFO:teuthology.orchestra.run.vm01.stdout:(49/136): unzip-6.0-59.el9.x86_64.rpm 2.0 MB/s | 182 kB 00:00 2026-03-10T06:39:21.472 INFO:teuthology.orchestra.run.vm08.stdout:(49/136): unzip-6.0-59.el9.x86_64.rpm 1.7 MB/s | 182 kB 00:00 
2026-03-10T06:39:21.481 INFO:teuthology.orchestra.run.vm08.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 3.6 MB/s | 266 kB 00:00 2026-03-10T06:39:21.482 INFO:teuthology.orchestra.run.vm01.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 3.7 MB/s | 266 kB 00:00 2026-03-10T06:39:21.515 INFO:teuthology.orchestra.run.vm09.stdout:(51/136): flexiblas-3.0.4-9.el9.x86_64.rpm 200 kB/s | 30 kB 00:00 2026-03-10T06:39:21.638 INFO:teuthology.orchestra.run.vm01.stdout:(51/136): flexiblas-3.0.4-9.el9.x86_64.rpm 190 kB/s | 30 kB 00:00 2026-03-10T06:39:21.706 INFO:teuthology.orchestra.run.vm01.stdout:(52/136): boost-program-options-1.75.0-13.el9.x 402 kB/s | 104 kB 00:00 2026-03-10T06:39:21.781 INFO:teuthology.orchestra.run.vm01.stdout:(53/136): flexiblas-openblas-openmp-3.0.4-9.el9 199 kB/s | 15 kB 00:00 2026-03-10T06:39:21.781 INFO:teuthology.orchestra.run.vm09.stdout:(52/136): ceph-test-19.2.3-678.ge911bdeb.el9.x8 8.2 MB/s | 50 MB 00:06 2026-03-10T06:39:21.782 INFO:teuthology.orchestra.run.vm09.stdout:(53/136): boost-program-options-1.75.0-13.el9.x 244 kB/s | 104 kB 00:00 2026-03-10T06:39:21.854 INFO:teuthology.orchestra.run.vm08.stdout:(51/136): flexiblas-3.0.4-9.el9.x86_64.rpm 80 kB/s | 30 kB 00:00 2026-03-10T06:39:21.860 INFO:teuthology.orchestra.run.vm09.stdout:(54/136): flexiblas-openblas-openmp-3.0.4-9.el9 189 kB/s | 15 kB 00:00 2026-03-10T06:39:21.910 INFO:teuthology.orchestra.run.vm08.stdout:(52/136): boost-program-options-1.75.0-13.el9.x 238 kB/s | 104 kB 00:00 2026-03-10T06:39:21.936 INFO:teuthology.orchestra.run.vm01.stdout:(54/136): libnbd-1.20.3-4.el9.x86_64.rpm 1.0 MB/s | 164 kB 00:00 2026-03-10T06:39:21.958 INFO:teuthology.orchestra.run.vm09.stdout:(55/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 1.6 MB/s | 160 kB 00:00 2026-03-10T06:39:22.036 INFO:teuthology.orchestra.run.vm08.stdout:(53/136): flexiblas-openblas-openmp-3.0.4-9.el9 118 kB/s | 15 kB 00:00 2026-03-10T06:39:22.037 INFO:teuthology.orchestra.run.vm01.stdout:(55/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 1.6 MB/s | 
160 kB 00:00 2026-03-10T06:39:22.037 INFO:teuthology.orchestra.run.vm09.stdout:(56/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 577 kB/s | 45 kB 00:00 2026-03-10T06:39:22.045 INFO:teuthology.orchestra.run.vm09.stdout:(57/136): libnbd-1.20.3-4.el9.x86_64.rpm 623 kB/s | 164 kB 00:00 2026-03-10T06:39:22.088 INFO:teuthology.orchestra.run.vm01.stdout:(56/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 891 kB/s | 45 kB 00:00 2026-03-10T06:39:22.140 INFO:teuthology.orchestra.run.vm09.stdout:(58/136): librdkafka-1.6.1-102.el9.x86_64.rpm 6.3 MB/s | 662 kB 00:00 2026-03-10T06:39:22.161 INFO:teuthology.orchestra.run.vm08.stdout:(54/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 9.7 MB/s | 3.0 MB 00:00 2026-03-10T06:39:22.192 INFO:teuthology.orchestra.run.vm09.stdout:(59/136): libxslt-1.1.34-12.el9.x86_64.rpm 4.4 MB/s | 233 kB 00:00 2026-03-10T06:39:22.194 INFO:teuthology.orchestra.run.vm08.stdout:(55/136): libnbd-1.20.3-4.el9.x86_64.rpm 1.0 MB/s | 164 kB 00:00 2026-03-10T06:39:22.211 INFO:teuthology.orchestra.run.vm08.stdout:(56/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 3.2 MB/s | 160 kB 00:00 2026-03-10T06:39:22.241 INFO:teuthology.orchestra.run.vm09.stdout:(60/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 1.2 MB/s | 246 kB 00:00 2026-03-10T06:39:22.244 INFO:teuthology.orchestra.run.vm09.stdout:(61/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 5.4 MB/s | 292 kB 00:00 2026-03-10T06:39:22.245 INFO:teuthology.orchestra.run.vm08.stdout:(57/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 900 kB/s | 45 kB 00:00 2026-03-10T06:39:22.284 INFO:teuthology.orchestra.run.vm08.stdout:(58/136): librdkafka-1.6.1-102.el9.x86_64.rpm 8.9 MB/s | 662 kB 00:00 2026-03-10T06:39:22.295 INFO:teuthology.orchestra.run.vm09.stdout:(62/136): openblas-0.3.29-1.el9.x86_64.rpm 835 kB/s | 42 kB 00:00 2026-03-10T06:39:22.316 INFO:teuthology.orchestra.run.vm08.stdout:(59/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 3.4 MB/s | 246 kB 00:00 2026-03-10T06:39:22.334 INFO:teuthology.orchestra.run.vm01.stdout:(57/136): 
librdkafka-1.6.1-102.el9.x86_64.rpm 2.6 MB/s | 662 kB 00:00 2026-03-10T06:39:22.335 INFO:teuthology.orchestra.run.vm08.stdout:(60/136): libxslt-1.1.34-12.el9.x86_64.rpm 4.5 MB/s | 233 kB 00:00 2026-03-10T06:39:22.346 INFO:teuthology.orchestra.run.vm09.stdout:(63/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 3.6 MB/s | 3.0 MB 00:00 2026-03-10T06:39:22.368 INFO:teuthology.orchestra.run.vm08.stdout:(61/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 5.6 MB/s | 292 kB 00:00 2026-03-10T06:39:22.484 INFO:teuthology.orchestra.run.vm09.stdout:(64/136): lua-5.4.4-4.el9.x86_64.rpm 775 kB/s | 188 kB 00:00 2026-03-10T06:39:22.549 INFO:teuthology.orchestra.run.vm08.stdout:(62/136): lua-5.4.4-4.el9.x86_64.rpm 880 kB/s | 188 kB 00:00 2026-03-10T06:39:22.617 INFO:teuthology.orchestra.run.vm08.stdout:(63/136): openblas-0.3.29-1.el9.x86_64.rpm 169 kB/s | 42 kB 00:00 2026-03-10T06:39:22.709 INFO:teuthology.orchestra.run.vm09.stdout:(65/136): protobuf-3.14.0-17.el9.x86_64.rpm 2.8 MB/s | 1.0 MB 00:00 2026-03-10T06:39:22.710 INFO:teuthology.orchestra.run.vm01.stdout:(58/136): ceph-test-19.2.3-678.ge911bdeb.el9.x8 7.7 MB/s | 50 MB 00:06 2026-03-10T06:39:22.712 INFO:teuthology.orchestra.run.vm01.stdout:(59/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 653 kB/s | 246 kB 00:00 2026-03-10T06:39:22.721 INFO:teuthology.orchestra.run.vm01.stdout:(60/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 2.8 MB/s | 3.0 MB 00:01 2026-03-10T06:39:22.874 INFO:teuthology.orchestra.run.vm01.stdout:(61/136): lua-5.4.4-4.el9.x86_64.rpm 1.2 MB/s | 188 kB 00:00 2026-03-10T06:39:22.942 INFO:teuthology.orchestra.run.vm09.stdout:(66/136): openblas-openmp-0.3.29-1.el9.x86_64.r 8.2 MB/s | 5.3 MB 00:00 2026-03-10T06:39:22.942 INFO:teuthology.orchestra.run.vm01.stdout:(62/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 1.2 MB/s | 292 kB 00:00 2026-03-10T06:39:22.943 INFO:teuthology.orchestra.run.vm01.stdout:(63/136): openblas-0.3.29-1.el9.x86_64.rpm 620 kB/s | 42 kB 00:00 2026-03-10T06:39:22.943 
INFO:teuthology.orchestra.run.vm09.stdout:(67/136): python3-devel-3.9.25-3.el9.x86_64.rpm 1.0 MB/s | 244 kB 00:00 2026-03-10T06:39:22.977 INFO:teuthology.orchestra.run.vm08.stdout:(64/136): openblas-openmp-0.3.29-1.el9.x86_64.r 12 MB/s | 5.3 MB 00:00 2026-03-10T06:39:22.994 INFO:teuthology.orchestra.run.vm09.stdout:(68/136): python3-jmespath-1.0.1-1.el9.noarch.r 933 kB/s | 48 kB 00:00 2026-03-10T06:39:23.013 INFO:teuthology.orchestra.run.vm01.stdout:(64/136): libxslt-1.1.34-12.el9.x86_64.rpm 770 kB/s | 233 kB 00:00 2026-03-10T06:39:23.032 INFO:teuthology.orchestra.run.vm08.stdout:(65/136): protobuf-3.14.0-17.el9.x86_64.rpm 2.4 MB/s | 1.0 MB 00:00 2026-03-10T06:39:23.089 INFO:teuthology.orchestra.run.vm09.stdout:(69/136): python3-jinja2-2.11.3-8.el9.noarch.rp 1.6 MB/s | 249 kB 00:00 2026-03-10T06:39:23.118 INFO:teuthology.orchestra.run.vm08.stdout:(66/136): python3-devel-3.9.25-3.el9.x86_64.rpm 2.8 MB/s | 244 kB 00:00 2026-03-10T06:39:23.141 INFO:teuthology.orchestra.run.vm09.stdout:(70/136): python3-libstoragemgmt-1.10.1-1.el9.x 1.2 MB/s | 177 kB 00:00 2026-03-10T06:39:23.202 INFO:teuthology.orchestra.run.vm09.stdout:(71/136): python3-markupsafe-1.1.1-12.el9.x86_6 572 kB/s | 35 kB 00:00 2026-03-10T06:39:23.209 INFO:teuthology.orchestra.run.vm09.stdout:(72/136): python3-mako-1.1.4-6.el9.noarch.rpm 1.4 MB/s | 172 kB 00:00 2026-03-10T06:39:23.428 INFO:teuthology.orchestra.run.vm08.stdout:(67/136): python3-babel-2.9.1-2.el9.noarch.rpm 13 MB/s | 6.0 MB 00:00 2026-03-10T06:39:23.429 INFO:teuthology.orchestra.run.vm08.stdout:(68/136): python3-jinja2-2.11.3-8.el9.noarch.rp 800 kB/s | 249 kB 00:00 2026-03-10T06:39:23.476 INFO:teuthology.orchestra.run.vm08.stdout:(69/136): python3-jmespath-1.0.1-1.el9.noarch.r 993 kB/s | 48 kB 00:00 2026-03-10T06:39:23.479 INFO:teuthology.orchestra.run.vm08.stdout:(70/136): python3-libstoragemgmt-1.10.1-1.el9.x 3.5 MB/s | 177 kB 00:00 2026-03-10T06:39:23.504 INFO:teuthology.orchestra.run.vm09.stdout:(73/136): 
python3-numpy-f2py-1.23.5-2.el9.x86_6 1.5 MB/s | 442 kB 00:00 2026-03-10T06:39:23.527 INFO:teuthology.orchestra.run.vm08.stdout:(71/136): python3-mako-1.1.4-6.el9.noarch.rpm 3.4 MB/s | 172 kB 00:00 2026-03-10T06:39:23.527 INFO:teuthology.orchestra.run.vm08.stdout:(72/136): python3-markupsafe-1.1.1-12.el9.x86_6 720 kB/s | 35 kB 00:00 2026-03-10T06:39:23.580 INFO:teuthology.orchestra.run.vm08.stdout:(73/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 8.3 MB/s | 442 kB 00:00 2026-03-10T06:39:23.602 INFO:teuthology.orchestra.run.vm09.stdout:(74/136): python3-packaging-20.9-5.el9.noarch.r 787 kB/s | 77 kB 00:00 2026-03-10T06:39:23.625 INFO:teuthology.orchestra.run.vm01.stdout:(65/136): protobuf-3.14.0-17.el9.x86_64.rpm 1.5 MB/s | 1.0 MB 00:00 2026-03-10T06:39:23.646 INFO:teuthology.orchestra.run.vm08.stdout:(74/136): python3-packaging-20.9-5.el9.noarch.r 1.2 MB/s | 77 kB 00:00 2026-03-10T06:39:23.800 INFO:teuthology.orchestra.run.vm09.stdout:(75/136): python3-protobuf-3.14.0-17.el9.noarch 1.3 MB/s | 267 kB 00:00 2026-03-10T06:39:23.810 INFO:teuthology.orchestra.run.vm08.stdout:(75/136): python3-protobuf-3.14.0-17.el9.noarch 1.6 MB/s | 267 kB 00:00 2026-03-10T06:39:23.840 INFO:teuthology.orchestra.run.vm01.stdout:(66/136): python3-babel-2.9.1-2.el9.noarch.rpm 7.2 MB/s | 6.0 MB 00:00 2026-03-10T06:39:23.951 INFO:teuthology.orchestra.run.vm01.stdout:(67/136): python3-jinja2-2.11.3-8.el9.noarch.rp 2.2 MB/s | 249 kB 00:00 2026-03-10T06:39:23.951 INFO:teuthology.orchestra.run.vm09.stdout:(76/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 1.0 MB/s | 157 kB 00:00 2026-03-10T06:39:23.953 INFO:teuthology.orchestra.run.vm01.stdout:(68/136): python3-devel-3.9.25-3.el9.x86_64.rpm 747 kB/s | 244 kB 00:00 2026-03-10T06:39:23.981 INFO:teuthology.orchestra.run.vm08.stdout:(76/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 14 MB/s | 6.1 MB 00:00 2026-03-10T06:39:23.982 INFO:teuthology.orchestra.run.vm08.stdout:(77/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 917 kB/s | 157 kB 00:00 
2026-03-10T06:39:24.005 INFO:teuthology.orchestra.run.vm01.stdout:(69/136): python3-jmespath-1.0.1-1.el9.noarch.r 882 kB/s | 48 kB 00:00 2026-03-10T06:39:24.032 INFO:teuthology.orchestra.run.vm08.stdout:(78/136): python3-pyasn1-modules-0.4.8-7.el9.no 5.3 MB/s | 277 kB 00:00 2026-03-10T06:39:24.033 INFO:teuthology.orchestra.run.vm08.stdout:(79/136): python3-requests-oauthlib-1.3.0-12.el 1.0 MB/s | 54 kB 00:00 2026-03-10T06:39:24.058 INFO:teuthology.orchestra.run.vm01.stdout:(70/136): python3-mako-1.1.4-6.el9.noarch.rpm 3.2 MB/s | 172 kB 00:00 2026-03-10T06:39:24.081 INFO:teuthology.orchestra.run.vm08.stdout:(80/136): python3-toml-0.10.2-6.el9.noarch.rpm 869 kB/s | 42 kB 00:00 2026-03-10T06:39:24.112 INFO:teuthology.orchestra.run.vm01.stdout:(71/136): python3-markupsafe-1.1.1-12.el9.x86_6 649 kB/s | 35 kB 00:00 2026-03-10T06:39:24.132 INFO:teuthology.orchestra.run.vm08.stdout:(81/136): qatlib-25.08.0-2.el9.x86_64.rpm 4.6 MB/s | 240 kB 00:00 2026-03-10T06:39:24.158 INFO:teuthology.orchestra.run.vm01.stdout:(72/136): python3-libstoragemgmt-1.10.1-1.el9.x 864 kB/s | 177 kB 00:00 2026-03-10T06:39:24.180 INFO:teuthology.orchestra.run.vm08.stdout:(82/136): qatlib-service-25.08.0-2.el9.x86_64.r 775 kB/s | 37 kB 00:00 2026-03-10T06:39:24.197 INFO:teuthology.orchestra.run.vm09.stdout:(77/136): python3-pyasn1-modules-0.4.8-7.el9.no 1.1 MB/s | 277 kB 00:00 2026-03-10T06:39:24.248 INFO:teuthology.orchestra.run.vm09.stdout:(78/136): python3-requests-oauthlib-1.3.0-12.el 1.0 MB/s | 54 kB 00:00 2026-03-10T06:39:24.500 INFO:teuthology.orchestra.run.vm08.stdout:(83/136): ceph-test-19.2.3-678.ge911bdeb.el9.x8 6.3 MB/s | 50 MB 00:07 2026-03-10T06:39:24.585 INFO:teuthology.orchestra.run.vm08.stdout:(84/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 164 kB/s | 66 kB 00:00 2026-03-10T06:39:24.822 INFO:teuthology.orchestra.run.vm08.stdout:(85/136): socat-1.7.4.1-8.el9.x86_64.rpm 946 kB/s | 303 kB 00:00 2026-03-10T06:39:24.849 INFO:teuthology.orchestra.run.vm01.stdout:(73/136): 
python3-numpy-f2py-1.23.5-2.el9.x86_6 640 kB/s | 442 kB 00:00 2026-03-10T06:39:24.928 INFO:teuthology.orchestra.run.vm08.stdout:(86/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 186 kB/s | 64 kB 00:00 2026-03-10T06:39:24.931 INFO:teuthology.orchestra.run.vm08.stdout:(87/136): lua-devel-5.4.4-4.el9.x86_64.rpm 205 kB/s | 22 kB 00:00 2026-03-10T06:39:25.008 INFO:teuthology.orchestra.run.vm01.stdout:(74/136): python3-packaging-20.9-5.el9.noarch.r 484 kB/s | 77 kB 00:00 2026-03-10T06:39:25.208 INFO:teuthology.orchestra.run.vm08.stdout:(88/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 1.9 MB/s | 551 kB 00:00 2026-03-10T06:39:25.317 INFO:teuthology.orchestra.run.vm08.stdout:(89/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 15 MB/s | 19 MB 00:01 2026-03-10T06:39:25.321 INFO:teuthology.orchestra.run.vm08.stdout:(90/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 2.7 MB/s | 308 kB 00:00 2026-03-10T06:39:25.450 INFO:teuthology.orchestra.run.vm08.stdout:(91/136): protobuf-compiler-3.14.0-17.el9.x86_6 1.6 MB/s | 862 kB 00:00 2026-03-10T06:39:25.464 INFO:teuthology.orchestra.run.vm01.stdout:(75/136): python3-protobuf-3.14.0-17.el9.noarch 587 kB/s | 267 kB 00:00 2026-03-10T06:39:25.525 INFO:teuthology.orchestra.run.vm08.stdout:(92/136): grpc-data-1.46.7-10.el9.noarch.rpm 94 kB/s | 19 kB 00:00 2026-03-10T06:39:25.573 INFO:teuthology.orchestra.run.vm08.stdout:(93/136): libarrow-9.0.0-15.el9.x86_64.rpm 18 MB/s | 4.4 MB 00:00 2026-03-10T06:39:25.574 INFO:teuthology.orchestra.run.vm08.stdout:(94/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 201 kB/s | 25 kB 00:00 2026-03-10T06:39:25.624 INFO:teuthology.orchestra.run.vm08.stdout:(95/136): liboath-2.6.12-1.el9.x86_64.rpm 500 kB/s | 49 kB 00:00 2026-03-10T06:39:25.629 INFO:teuthology.orchestra.run.vm08.stdout:(96/136): libunwind-1.6.2-1.el9.x86_64.rpm 1.2 MB/s | 67 kB 00:00 2026-03-10T06:39:25.718 INFO:teuthology.orchestra.run.vm01.stdout:(76/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 621 kB/s | 157 kB 00:00 2026-03-10T06:39:25.770 
INFO:teuthology.orchestra.run.vm08.stdout:(97/136): luarocks-3.9.2-5.el9.noarch.rpm 773 kB/s | 151 kB 00:00 2026-03-10T06:39:25.869 INFO:teuthology.orchestra.run.vm08.stdout:(98/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 3.3 MB/s | 838 kB 00:00 2026-03-10T06:39:25.968 INFO:teuthology.orchestra.run.vm08.stdout:(99/136): python3-asyncssh-2.13.2-5.el9.noarch. 1.6 MB/s | 548 kB 00:00 2026-03-10T06:39:25.973 INFO:teuthology.orchestra.run.vm08.stdout:(100/136): python3-autocommand-2.2.2-8.el9.noar 145 kB/s | 29 kB 00:00 2026-03-10T06:39:25.981 INFO:teuthology.orchestra.run.vm08.stdout:(101/136): python3-backports-tarfile-1.2.0-1.el 541 kB/s | 60 kB 00:00 2026-03-10T06:39:26.022 INFO:teuthology.orchestra.run.vm08.stdout:(102/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 805 kB/s | 43 kB 00:00 2026-03-10T06:39:26.030 INFO:teuthology.orchestra.run.vm08.stdout:(103/136): python3-certifi-2023.05.07-4.el9.noa 286 kB/s | 14 kB 00:00 2026-03-10T06:39:26.079 INFO:teuthology.orchestra.run.vm08.stdout:(104/136): python3-cheroot-10.0.1-4.el9.noarch. 
3.0 MB/s | 173 kB 00:00 2026-03-10T06:39:26.083 INFO:teuthology.orchestra.run.vm08.stdout:(105/136): python3-cachetools-4.2.4-1.el9.noarc 292 kB/s | 32 kB 00:00 2026-03-10T06:39:26.094 INFO:teuthology.orchestra.run.vm08.stdout:(106/136): python3-cherrypy-18.6.1-2.el9.noarch 5.5 MB/s | 358 kB 00:00 2026-03-10T06:39:26.136 INFO:teuthology.orchestra.run.vm01.stdout:(77/136): python3-pyasn1-modules-0.4.8-7.el9.no 664 kB/s | 277 kB 00:00 2026-03-10T06:39:26.138 INFO:teuthology.orchestra.run.vm08.stdout:(107/136): python3-google-auth-2.45.0-1.el9.noa 4.2 MB/s | 254 kB 00:00 2026-03-10T06:39:26.182 INFO:teuthology.orchestra.run.vm08.stdout:(108/136): python3-grpcio-tools-1.46.7-10.el9.x 1.6 MB/s | 144 kB 00:00 2026-03-10T06:39:26.191 INFO:teuthology.orchestra.run.vm08.stdout:(109/136): python3-jaraco-8.2.1-3.el9.noarch.rp 204 kB/s | 11 kB 00:00 2026-03-10T06:39:26.234 INFO:teuthology.orchestra.run.vm08.stdout:(110/136): python3-jaraco-classes-3.2.1-5.el9.n 343 kB/s | 18 kB 00:00 2026-03-10T06:39:26.253 INFO:teuthology.orchestra.run.vm01.stdout:(78/136): python3-requests-oauthlib-1.3.0-12.el 459 kB/s | 54 kB 00:00 2026-03-10T06:39:26.256 INFO:teuthology.orchestra.run.vm08.stdout:(111/136): python3-jaraco-collections-3.0.0-8.e 358 kB/s | 23 kB 00:00 2026-03-10T06:39:26.310 INFO:teuthology.orchestra.run.vm08.stdout:(112/136): python3-grpcio-1.46.7-10.el9.x86_64. 
9.0 MB/s | 2.0 MB 00:00 2026-03-10T06:39:26.311 INFO:teuthology.orchestra.run.vm08.stdout:(113/136): python3-jaraco-context-6.0.1-3.el9.n 254 kB/s | 20 kB 00:00 2026-03-10T06:39:26.311 INFO:teuthology.orchestra.run.vm08.stdout:(114/136): python3-jaraco-functools-3.5.0-2.el9 350 kB/s | 19 kB 00:00 2026-03-10T06:39:26.361 INFO:teuthology.orchestra.run.vm08.stdout:(115/136): python3-logutils-0.3.5-21.el9.noarch 930 kB/s | 46 kB 00:00 2026-03-10T06:39:26.363 INFO:teuthology.orchestra.run.vm08.stdout:(116/136): python3-jaraco-text-4.0.0-2.el9.noar 503 kB/s | 26 kB 00:00 2026-03-10T06:39:26.430 INFO:teuthology.orchestra.run.vm08.stdout:(117/136): python3-more-itertools-8.12.0-2.el9. 1.1 MB/s | 79 kB 00:00 2026-03-10T06:39:26.452 INFO:teuthology.orchestra.run.vm08.stdout:(118/136): python3-natsort-7.1.1-5.el9.noarch.r 652 kB/s | 58 kB 00:00 2026-03-10T06:39:26.506 INFO:teuthology.orchestra.run.vm08.stdout:(119/136): python3-pecan-1.4.2-3.el9.noarch.rpm 3.5 MB/s | 272 kB 00:00 2026-03-10T06:39:26.519 INFO:teuthology.orchestra.run.vm08.stdout:(120/136): python3-portend-3.1.0-2.el9.noarch.r 245 kB/s | 16 kB 00:00 2026-03-10T06:39:26.560 INFO:teuthology.orchestra.run.vm08.stdout:(121/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 1.6 MB/s | 90 kB 00:00 2026-03-10T06:39:26.567 INFO:teuthology.orchestra.run.vm08.stdout:(122/136): python3-repoze-lru-0.7-16.el9.noarch 639 kB/s | 31 kB 00:00 2026-03-10T06:39:26.629 INFO:teuthology.orchestra.run.vm08.stdout:(123/136): python3-routes-2.5.1-5.el9.noarch.rp 2.7 MB/s | 188 kB 00:00 2026-03-10T06:39:26.630 INFO:teuthology.orchestra.run.vm08.stdout:(124/136): python3-rsa-4.9-2.el9.noarch.rpm 951 kB/s | 59 kB 00:00 2026-03-10T06:39:26.681 INFO:teuthology.orchestra.run.vm08.stdout:(125/136): python3-tempora-5.0.0-2.el9.noarch.r 692 kB/s | 36 kB 00:00 2026-03-10T06:39:26.693 INFO:teuthology.orchestra.run.vm08.stdout:(126/136): python3-typing-extensions-4.15.0-1.e 1.3 MB/s | 86 kB 00:00 2026-03-10T06:39:26.728 
INFO:teuthology.orchestra.run.vm08.stdout:(127/136): python3-kubernetes-26.1.0-3.el9.noar 2.5 MB/s | 1.0 MB 00:00 2026-03-10T06:39:26.734 INFO:teuthology.orchestra.run.vm08.stdout:(128/136): python3-webob-1.8.8-2.el9.noarch.rpm 4.3 MB/s | 230 kB 00:00 2026-03-10T06:39:26.773 INFO:teuthology.orchestra.run.vm08.stdout:(129/136): python3-websocket-client-1.2.3-2.el9 1.1 MB/s | 90 kB 00:00 2026-03-10T06:39:26.784 INFO:teuthology.orchestra.run.vm08.stdout:(130/136): python3-werkzeug-2.0.3-3.el9.1.noarc 7.5 MB/s | 427 kB 00:00 2026-03-10T06:39:26.786 INFO:teuthology.orchestra.run.vm08.stdout:(131/136): python3-xmltodict-0.12.0-15.el9.noar 422 kB/s | 22 kB 00:00 2026-03-10T06:39:26.826 INFO:teuthology.orchestra.run.vm08.stdout:(132/136): python3-zc-lockfile-2.0-10.el9.noarc 375 kB/s | 20 kB 00:00 2026-03-10T06:39:26.920 INFO:teuthology.orchestra.run.vm08.stdout:(133/136): thrift-0.15.0-4.el9.x86_64.rpm 12 MB/s | 1.6 MB 00:00 2026-03-10T06:39:26.943 INFO:teuthology.orchestra.run.vm08.stdout:(134/136): re2-20211101-20.el9.x86_64.rpm 1.2 MB/s | 191 kB 00:00 2026-03-10T06:39:27.476 INFO:teuthology.orchestra.run.vm01.stdout:(79/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 1.8 MB/s | 6.1 MB 00:03 2026-03-10T06:39:27.621 INFO:teuthology.orchestra.run.vm01.stdout:(80/136): python3-toml-0.10.2-6.el9.noarch.rpm 287 kB/s | 42 kB 00:00 2026-03-10T06:39:27.970 INFO:teuthology.orchestra.run.vm01.stdout:(81/136): qatlib-25.08.0-2.el9.x86_64.rpm 689 kB/s | 240 kB 00:00 2026-03-10T06:39:28.021 INFO:teuthology.orchestra.run.vm01.stdout:(82/136): qatlib-service-25.08.0-2.el9.x86_64.r 732 kB/s | 37 kB 00:00 2026-03-10T06:39:28.068 INFO:teuthology.orchestra.run.vm01.stdout:(83/136): openblas-openmp-0.3.29-1.el9.x86_64.r 1.0 MB/s | 5.3 MB 00:05 2026-03-10T06:39:28.121 INFO:teuthology.orchestra.run.vm01.stdout:(84/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 663 kB/s | 66 kB 00:00 2026-03-10T06:39:28.166 INFO:teuthology.orchestra.run.vm08.stdout:(135/136): librados2-19.2.3-678.ge911bdeb.el9.x 2.6 
MB/s | 3.4 MB 00:01 2026-03-10T06:39:28.221 INFO:teuthology.orchestra.run.vm01.stdout:(85/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 638 kB/s | 64 kB 00:00 2026-03-10T06:39:28.326 INFO:teuthology.orchestra.run.vm08.stdout:(136/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 2.3 MB/s | 3.2 MB 00:01 2026-03-10T06:39:28.330 INFO:teuthology.orchestra.run.vm08.stdout:-------------------------------------------------------------------------------- 2026-03-10T06:39:28.330 INFO:teuthology.orchestra.run.vm08.stdout:Total 7.2 MB/s | 210 MB 00:29 2026-03-10T06:39:28.340 INFO:teuthology.orchestra.run.vm01.stdout:(86/136): lua-devel-5.4.4-4.el9.x86_64.rpm 188 kB/s | 22 kB 00:00 2026-03-10T06:39:28.455 INFO:teuthology.orchestra.run.vm01.stdout:(87/136): socat-1.7.4.1-8.el9.x86_64.rpm 784 kB/s | 303 kB 00:00 2026-03-10T06:39:28.505 INFO:teuthology.orchestra.run.vm01.stdout:(88/136): protobuf-compiler-3.14.0-17.el9.x86_6 5.1 MB/s | 862 kB 00:00 2026-03-10T06:39:28.784 INFO:teuthology.orchestra.run.vm01.stdout:(89/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 1.6 MB/s | 551 kB 00:00 2026-03-10T06:39:28.800 INFO:teuthology.orchestra.run.vm01.stdout:(90/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 1.0 MB/s | 308 kB 00:00 2026-03-10T06:39:28.842 INFO:teuthology.orchestra.run.vm01.stdout:(91/136): grpc-data-1.46.7-10.el9.noarch.rpm 333 kB/s | 19 kB 00:00 2026-03-10T06:39:28.845 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check 2026-03-10T06:39:28.894 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded. 
2026-03-10T06:39:28.894 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:39:28.895 INFO:teuthology.orchestra.run.vm01.stdout:(92/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 471 kB/s | 25 kB 00:00
2026-03-10T06:39:28.970 INFO:teuthology.orchestra.run.vm01.stdout:(93/136): liboath-2.6.12-1.el9.x86_64.rpm 653 kB/s | 49 kB 00:00
2026-03-10T06:39:29.025 INFO:teuthology.orchestra.run.vm01.stdout:(94/136): libunwind-1.6.2-1.el9.x86_64.rpm 1.2 MB/s | 67 kB 00:00
2026-03-10T06:39:29.054 INFO:teuthology.orchestra.run.vm01.stdout:(95/136): libarrow-9.0.0-15.el9.x86_64.rpm 17 MB/s | 4.4 MB 00:00
2026-03-10T06:39:29.097 INFO:teuthology.orchestra.run.vm01.stdout:(96/136): luarocks-3.9.2-5.el9.noarch.rpm 2.0 MB/s | 151 kB 00:00
2026-03-10T06:39:29.173 INFO:teuthology.orchestra.run.vm01.stdout:(97/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 6.9 MB/s | 838 kB 00:00
2026-03-10T06:39:29.223 INFO:teuthology.orchestra.run.vm01.stdout:(98/136): python3-asyncssh-2.13.2-5.el9.noarch. 4.3 MB/s | 548 kB 00:00
2026-03-10T06:39:29.224 INFO:teuthology.orchestra.run.vm01.stdout:(99/136): python3-autocommand-2.2.2-8.el9.noarc 583 kB/s | 29 kB 00:00
2026-03-10T06:39:29.278 INFO:teuthology.orchestra.run.vm01.stdout:(100/136): python3-backports-tarfile-1.2.0-1.el 1.1 MB/s | 60 kB 00:00
2026-03-10T06:39:29.293 INFO:teuthology.orchestra.run.vm01.stdout:(101/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 634 kB/s | 43 kB 00:00
2026-03-10T06:39:29.328 INFO:teuthology.orchestra.run.vm01.stdout:(102/136): python3-cachetools-4.2.4-1.el9.noarc 651 kB/s | 32 kB 00:00
2026-03-10T06:39:29.347 INFO:teuthology.orchestra.run.vm01.stdout:(103/136): python3-certifi-2023.05.07-4.el9.noa 260 kB/s | 14 kB 00:00
2026-03-10T06:39:29.388 INFO:teuthology.orchestra.run.vm01.stdout:(104/136): python3-cheroot-10.0.1-4.el9.noarch. 2.8 MB/s | 173 kB 00:00
2026-03-10T06:39:29.424 INFO:teuthology.orchestra.run.vm01.stdout:(105/136): python3-cherrypy-18.6.1-2.el9.noarch 4.6 MB/s | 358 kB 00:00
2026-03-10T06:39:29.463 INFO:teuthology.orchestra.run.vm01.stdout:(106/136): python3-google-auth-2.45.0-1.el9.noa 3.3 MB/s | 254 kB 00:00
2026-03-10T06:39:29.523 INFO:teuthology.orchestra.run.vm01.stdout:(107/136): python3-grpcio-tools-1.46.7-10.el9.x 2.4 MB/s | 144 kB 00:00
2026-03-10T06:39:29.581 INFO:teuthology.orchestra.run.vm01.stdout:(108/136): python3-jaraco-8.2.1-3.el9.noarch.rp 187 kB/s | 11 kB 00:00
2026-03-10T06:39:29.638 INFO:teuthology.orchestra.run.vm01.stdout:(109/136): python3-jaraco-classes-3.2.1-5.el9.n 310 kB/s | 18 kB 00:00
2026-03-10T06:39:29.684 INFO:teuthology.orchestra.run.vm01.stdout:(110/136): python3-jaraco-collections-3.0.0-8.e 507 kB/s | 23 kB 00:00
2026-03-10T06:39:29.731 INFO:teuthology.orchestra.run.vm01.stdout:(111/136): python3-jaraco-context-6.0.1-3.el9.n 422 kB/s | 20 kB 00:00
2026-03-10T06:39:29.732 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:39:29.733 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:39:29.815 INFO:teuthology.orchestra.run.vm01.stdout:(112/136): python3-jaraco-functools-3.5.0-2.el9 230 kB/s | 19 kB 00:00
2026-03-10T06:39:29.896 INFO:teuthology.orchestra.run.vm01.stdout:(113/136): python3-jaraco-text-4.0.0-2.el9.noar 330 kB/s | 26 kB 00:00
2026-03-10T06:39:30.047 INFO:teuthology.orchestra.run.vm01.stdout:(114/136): python3-kubernetes-26.1.0-3.el9.noar 6.7 MB/s | 1.0 MB 00:00
2026-03-10T06:39:30.094 INFO:teuthology.orchestra.run.vm01.stdout:(115/136): python3-grpcio-1.46.7-10.el9.x86_64. 3.0 MB/s | 2.0 MB 00:00
2026-03-10T06:39:30.095 INFO:teuthology.orchestra.run.vm01.stdout:(116/136): python3-logutils-0.3.5-21.el9.noarch 958 kB/s | 46 kB 00:00
2026-03-10T06:39:30.150 INFO:teuthology.orchestra.run.vm01.stdout:(117/136): python3-natsort-7.1.1-5.el9.noarch.r 1.0 MB/s | 58 kB 00:00
2026-03-10T06:39:30.151 INFO:teuthology.orchestra.run.vm01.stdout:(118/136): python3-more-itertools-8.12.0-2.el9. 1.4 MB/s | 79 kB 00:00
2026-03-10T06:39:30.205 INFO:teuthology.orchestra.run.vm01.stdout:(119/136): python3-portend-3.1.0-2.el9.noarch.r 308 kB/s | 16 kB 00:00
2026-03-10T06:39:30.223 INFO:teuthology.orchestra.run.vm01.stdout:(120/136): python3-pecan-1.4.2-3.el9.noarch.rpm 3.7 MB/s | 272 kB 00:00
2026-03-10T06:39:30.262 INFO:teuthology.orchestra.run.vm01.stdout:(121/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 1.5 MB/s | 90 kB 00:00
2026-03-10T06:39:30.274 INFO:teuthology.orchestra.run.vm01.stdout:(122/136): python3-repoze-lru-0.7-16.el9.noarch 606 kB/s | 31 kB 00:00
2026-03-10T06:39:30.326 INFO:teuthology.orchestra.run.vm01.stdout:(123/136): python3-rsa-4.9-2.el9.noarch.rpm 1.1 MB/s | 59 kB 00:00
2026-03-10T06:39:30.334 INFO:teuthology.orchestra.run.vm01.stdout:(124/136): python3-routes-2.5.1-5.el9.noarch.rp 2.6 MB/s | 188 kB 00:00
2026-03-10T06:39:30.383 INFO:teuthology.orchestra.run.vm01.stdout:(125/136): python3-tempora-5.0.0-2.el9.noarch.r 626 kB/s | 36 kB 00:00
2026-03-10T06:39:30.392 INFO:teuthology.orchestra.run.vm01.stdout:(126/136): python3-typing-extensions-4.15.0-1.e 1.4 MB/s | 86 kB 00:00
2026-03-10T06:39:30.451 INFO:teuthology.orchestra.run.vm01.stdout:(127/136): python3-websocket-client-1.2.3-2.el9 1.5 MB/s | 90 kB 00:00
2026-03-10T06:39:30.460 INFO:teuthology.orchestra.run.vm01.stdout:(128/136): python3-webob-1.8.8-2.el9.noarch.rpm 2.9 MB/s | 230 kB 00:00
2026-03-10T06:39:30.508 INFO:teuthology.orchestra.run.vm01.stdout:(129/136): python3-xmltodict-0.12.0-15.el9.noar 473 kB/s | 22 kB 00:00
2026-03-10T06:39:30.539 INFO:teuthology.orchestra.run.vm01.stdout:(130/136): python3-werkzeug-2.0.3-3.el9.1.noarc 4.8 MB/s | 427 kB 00:00
2026-03-10T06:39:30.556 INFO:teuthology.orchestra.run.vm01.stdout:(131/136): python3-zc-lockfile-2.0-10.el9.noarc 414 kB/s | 20 kB 00:00
2026-03-10T06:39:30.641 INFO:teuthology.orchestra.run.vm01.stdout:(132/136): re2-20211101-20.el9.x86_64.rpm 1.8 MB/s | 191 kB 00:00
2026-03-10T06:39:30.754 INFO:teuthology.orchestra.run.vm09.stdout:(79/136): python3-babel-2.9.1-2.el9.noarch.rpm 738 kB/s | 6.0 MB 00:08
2026-03-10T06:39:30.854 INFO:teuthology.orchestra.run.vm09.stdout:(80/136): python3-toml-0.10.2-6.el9.noarch.rpm 421 kB/s | 42 kB 00:00
2026-03-10T06:39:30.920 INFO:teuthology.orchestra.run.vm09.stdout:(81/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 813 kB/s | 6.1 MB 00:07
2026-03-10T06:39:30.920 INFO:teuthology.orchestra.run.vm01.stdout:(133/136): thrift-0.15.0-4.el9.x86_64.rpm 4.4 MB/s | 1.6 MB 00:00
2026-03-10T06:39:30.952 INFO:teuthology.orchestra.run.vm08.stdout:  Preparing        : 1/1
2026-03-10T06:39:30.967 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-more-itertools-8.12.0-2.el9.noarch 1/138
2026-03-10T06:39:30.970 INFO:teuthology.orchestra.run.vm09.stdout:(82/136): qatlib-service-25.08.0-2.el9.x86_64.r 742 kB/s | 37 kB 00:00
2026-03-10T06:39:30.981 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : thrift-0.15.0-4.el9.x86_64 2/138
2026-03-10T06:39:31.021 INFO:teuthology.orchestra.run.vm09.stdout:(83/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 1.3 MB/s | 66 kB 00:00
2026-03-10T06:39:31.161 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : lttng-ust-2.12.0-6.el9.x86_64 3/138
2026-03-10T06:39:31.163 INFO:teuthology.orchestra.run.vm08.stdout:  Upgrading        : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T06:39:31.212 INFO:teuthology.orchestra.run.vm09.stdout:(84/136): qatlib-25.08.0-2.el9.x86_64.rpm 669 kB/s | 240 kB 00:00
2026-03-10T06:39:31.229 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T06:39:31.230 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T06:39:31.261 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T06:39:31.267 INFO:teuthology.orchestra.run.vm09.stdout:(85/136): socat-1.7.4.1-8.el9.x86_64.rpm 1.2 MB/s | 303 kB 00:00
2026-03-10T06:39:31.270 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T06:39:31.274 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : librdkafka-1.6.1-102.el9.x86_64 7/138
2026-03-10T06:39:31.277 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : librabbitmq-0.11.0-7.el9.x86_64 8/138
2026-03-10T06:39:31.283 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-jaraco-8.2.1-3.el9.noarch 9/138
2026-03-10T06:39:31.297 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : libnbd-1.20.3-4.el9.x86_64 10/138
2026-03-10T06:39:31.346 INFO:teuthology.orchestra.run.vm09.stdout:(86/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 474 kB/s | 64 kB 00:00
2026-03-10T06:39:31.360 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T06:39:31.364 INFO:teuthology.orchestra.run.vm09.stdout:(87/136): lua-devel-5.4.4-4.el9.x86_64.rpm 231 kB/s | 22 kB 00:00
2026-03-10T06:39:31.398 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T06:39:31.452 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T06:39:31.466 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T06:39:31.507 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : re2-1:20211101-20.el9.x86_64 13/138
2026-03-10T06:39:31.548 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : libarrow-9.0.0-15.el9.x86_64 14/138
2026-03-10T06:39:31.555 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138
2026-03-10T06:39:31.575 INFO:teuthology.orchestra.run.vm09.stdout:(88/136): protobuf-compiler-3.14.0-17.el9.x86_6 3.7 MB/s | 862 kB 00:00
2026-03-10T06:39:31.585 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : liboath-2.6.12-1.el9.x86_64 16/138
2026-03-10T06:39:31.600 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-pyasn1-0.4.8-7.el9.noarch 17/138
2026-03-10T06:39:31.607 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-packaging-20.9-5.el9.noarch 18/138
2026-03-10T06:39:31.619 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-markupsafe-1.1.1-12.el9.x86_64 19/138
2026-03-10T06:39:31.627 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : protobuf-3.14.0-17.el9.x86_64 20/138
2026-03-10T06:39:31.632 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : lua-5.4.4-4.el9.x86_64 21/138
2026-03-10T06:39:31.638 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : flexiblas-3.0.4-9.el9.x86_64 22/138
2026-03-10T06:39:31.668 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : unzip-6.0-59.el9.x86_64 23/138
2026-03-10T06:39:31.742 INFO:teuthology.orchestra.run.vm01.stdout:(134/136): librados2-19.2.3-678.ge911bdeb.el9.x 3.1 MB/s | 3.4 MB 00:01
2026-03-10T06:39:31.757 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-urllib3-1.26.5-7.el9.noarch 24/138
2026-03-10T06:39:31.760 INFO:teuthology.orchestra.run.vm09.stdout:(89/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 1.4 MB/s | 551 kB 00:00
2026-03-10T06:39:31.762 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-requests-2.25.1-10.el9.noarch 25/138
2026-03-10T06:39:31.769 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : libquadmath-11.5.0-14.el9.x86_64 26/138
2026-03-10T06:39:31.772 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : libgfortran-11.5.0-14.el9.x86_64 27/138
2026-03-10T06:39:31.804 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : ledmon-libs-1.1.0-3.el9.x86_64 28/138
2026-03-10T06:39:31.870 INFO:teuthology.orchestra.run.vm09.stdout:(90/136): grpc-data-1.46.7-10.el9.noarch.rpm 178 kB/s | 19 kB 00:00
2026-03-10T06:39:31.887 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138
2026-03-10T06:39:31.897 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138
2026-03-10T06:39:31.912 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138
2026-03-10T06:39:31.921 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138
2026-03-10T06:39:31.952 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : zip-3.0-35.el9.x86_64 33/138
2026-03-10T06:39:31.959 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : luarocks-3.9.2-5.el9.noarch 34/138
2026-03-10T06:39:31.965 INFO:teuthology.orchestra.run.vm09.stdout:(91/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 789 kB/s | 308 kB 00:00
2026-03-10T06:39:31.970 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : lua-devel-5.4.4-4.el9.x86_64 35/138
2026-03-10T06:39:32.000 INFO:teuthology.orchestra.run.vm01.stdout:(135/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 2.9 MB/s | 3.2 MB 00:01
2026-03-10T06:39:32.011 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : protobuf-compiler-3.14.0-17.el9.x86_64 36/138
2026-03-10T06:39:32.014 INFO:teuthology.orchestra.run.vm09.stdout:(92/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 510 kB/s | 25 kB 00:00
2026-03-10T06:39:32.072 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-mako-1.1.4-6.el9.noarch 37/138
2026-03-10T06:39:32.093 INFO:teuthology.orchestra.run.vm09.stdout:(93/136): liboath-2.6.12-1.el9.x86_64.rpm 622 kB/s | 49 kB 00:00
2026-03-10T06:39:32.108 INFO:teuthology.orchestra.run.vm09.stdout:(94/136): libarrow-9.0.0-15.el9.x86_64.rpm 19 MB/s | 4.4 MB 00:00
2026-03-10T06:39:32.111 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138
2026-03-10T06:39:32.122 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-rsa-4.9-2.el9.noarch 39/138
2026-03-10T06:39:32.133 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138
2026-03-10T06:39:32.139 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138
2026-03-10T06:39:32.145 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-zc-lockfile-2.0-10.el9.noarch 42/138
2026-03-10T06:39:32.146 INFO:teuthology.orchestra.run.vm09.stdout:(95/136): libunwind-1.6.2-1.el9.x86_64.rpm 1.3 MB/s | 67 kB 00:00
2026-03-10T06:39:32.157 INFO:teuthology.orchestra.run.vm09.stdout:(96/136): luarocks-3.9.2-5.el9.noarch.rpm 3.0 MB/s | 151 kB 00:00
2026-03-10T06:39:32.166 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-xmltodict-0.12.0-15.el9.noarch 43/138
2026-03-10T06:39:32.193 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-websocket-client-1.2.3-2.el9.noarch 44/138
2026-03-10T06:39:32.201 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-webob-1.8.8-2.el9.noarch 45/138
2026-03-10T06:39:32.209 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-typing-extensions-4.15.0-1.el9.noarch 46/138
2026-03-10T06:39:32.212 INFO:teuthology.orchestra.run.vm09.stdout:(97/136): python3-asyncssh-2.13.2-5.el9.noarch. 9.8 MB/s | 548 kB 00:00
2026-03-10T06:39:32.224 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-repoze-lru-0.7-16.el9.noarch 47/138
2026-03-10T06:39:32.237 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-routes-2.5.1-5.el9.noarch 48/138
2026-03-10T06:39:32.249 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-natsort-7.1.1-5.el9.noarch 49/138
2026-03-10T06:39:32.260 INFO:teuthology.orchestra.run.vm09.stdout:(98/136): python3-autocommand-2.2.2-8.el9.noarc 620 kB/s | 29 kB 00:00
2026-03-10T06:39:32.285 INFO:teuthology.orchestra.run.vm09.stdout:(99/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 5.9 MB/s | 838 kB 00:00
2026-03-10T06:39:32.309 INFO:teuthology.orchestra.run.vm09.stdout:(100/136): python3-backports-tarfile-1.2.0-1.el 1.2 MB/s | 60 kB 00:00
2026-03-10T06:39:32.319 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-logutils-0.3.5-21.el9.noarch 50/138
2026-03-10T06:39:32.327 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-pecan-1.4.2-3.el9.noarch 51/138
2026-03-10T06:39:32.338 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-certifi-2023.05.07-4.el9.noarch 52/138
2026-03-10T06:39:32.341 INFO:teuthology.orchestra.run.vm09.stdout:(101/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 781 kB/s | 43 kB 00:00
2026-03-10T06:39:32.367 INFO:teuthology.orchestra.run.vm09.stdout:(102/136): python3-cachetools-4.2.4-1.el9.noarc 552 kB/s | 32 kB 00:00
2026-03-10T06:39:32.388 INFO:teuthology.orchestra.run.vm09.stdout:(103/136): python3-certifi-2023.05.07-4.el9.noa 300 kB/s | 14 kB 00:00
2026-03-10T06:39:32.388 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-cachetools-4.2.4-1.el9.noarch 53/138
2026-03-10T06:39:32.448 INFO:teuthology.orchestra.run.vm09.stdout:(104/136): python3-cherrypy-18.6.1-2.el9.noarch 5.9 MB/s | 358 kB 00:00
2026-03-10T06:39:32.487 INFO:teuthology.orchestra.run.vm09.stdout:(105/136): python3-cheroot-10.0.1-4.el9.noarch. 1.4 MB/s | 173 kB 00:00
2026-03-10T06:39:32.504 INFO:teuthology.orchestra.run.vm09.stdout:(106/136): python3-google-auth-2.45.0-1.el9.noa 4.4 MB/s | 254 kB 00:00
2026-03-10T06:39:32.560 INFO:teuthology.orchestra.run.vm09.stdout:(107/136): python3-grpcio-1.46.7-10.el9.x86_64. 28 MB/s | 2.0 MB 00:00
2026-03-10T06:39:32.562 INFO:teuthology.orchestra.run.vm09.stdout:(108/136): python3-grpcio-tools-1.46.7-10.el9.x 2.5 MB/s | 144 kB 00:00
2026-03-10T06:39:32.606 INFO:teuthology.orchestra.run.vm09.stdout:(109/136): python3-jaraco-8.2.1-3.el9.noarch.rp 232 kB/s | 11 kB 00:00
2026-03-10T06:39:32.609 INFO:teuthology.orchestra.run.vm09.stdout:(110/136): python3-jaraco-classes-3.2.1-5.el9.n 378 kB/s | 18 kB 00:00
2026-03-10T06:39:32.653 INFO:teuthology.orchestra.run.vm09.stdout:(111/136): python3-jaraco-collections-3.0.0-8.e 498 kB/s | 23 kB 00:00
2026-03-10T06:39:32.656 INFO:teuthology.orchestra.run.vm09.stdout:(112/136): python3-jaraco-context-6.0.1-3.el9.n 420 kB/s | 20 kB 00:00
2026-03-10T06:39:32.699 INFO:teuthology.orchestra.run.vm09.stdout:(113/136): python3-jaraco-functools-3.5.0-2.el9 419 kB/s | 19 kB 00:00
2026-03-10T06:39:32.703 INFO:teuthology.orchestra.run.vm09.stdout:(114/136): python3-jaraco-text-4.0.0-2.el9.noar 565 kB/s | 26 kB 00:00
2026-03-10T06:39:32.752 INFO:teuthology.orchestra.run.vm09.stdout:(115/136): python3-logutils-0.3.5-21.el9.noarch 936 kB/s | 46 kB 00:00
2026-03-10T06:39:32.759 INFO:teuthology.orchestra.run.vm09.stdout:(116/136): python3-kubernetes-26.1.0-3.el9.noar 17 MB/s | 1.0 MB 00:00
2026-03-10T06:39:32.796 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-google-auth-1:2.45.0-1.el9.noarch 54/138
2026-03-10T06:39:32.804 INFO:teuthology.orchestra.run.vm09.stdout:(117/136): python3-more-itertools-8.12.0-2.el9. 1.5 MB/s | 79 kB 00:00
2026-03-10T06:39:32.811 INFO:teuthology.orchestra.run.vm09.stdout:(118/136): python3-natsort-7.1.1-5.el9.noarch.r 1.1 MB/s | 58 kB 00:00
2026-03-10T06:39:32.813 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138
2026-03-10T06:39:32.820 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138
2026-03-10T06:39:32.829 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-jaraco-context-6.0.1-3.el9.noarch 57/138
2026-03-10T06:39:32.834 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-autocommand-2.2.2-8.el9.noarch 58/138
2026-03-10T06:39:32.844 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : libunwind-1.6.2-1.el9.x86_64 59/138
2026-03-10T06:39:32.849 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : gperftools-libs-2.9.1-3.el9.x86_64 60/138
2026-03-10T06:39:32.855 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : libarrow-doc-9.0.0-15.el9.noarch 61/138
2026-03-10T06:39:32.858 INFO:teuthology.orchestra.run.vm09.stdout:(119/136): python3-portend-3.1.0-2.el9.noarch.r 354 kB/s | 16 kB 00:00
2026-03-10T06:39:32.861 INFO:teuthology.orchestra.run.vm09.stdout:(120/136): python3-pecan-1.4.2-3.el9.noarch.rpm 4.7 MB/s | 272 kB 00:00
2026-03-10T06:39:32.889 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : grpc-data-1.46.7-10.el9.noarch 62/138
2026-03-10T06:39:32.911 INFO:teuthology.orchestra.run.vm09.stdout:(121/136): python3-repoze-lru-0.7-16.el9.noarch 626 kB/s | 31 kB 00:00
2026-03-10T06:39:32.940 INFO:teuthology.orchestra.run.vm09.stdout:(122/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 1.1 MB/s | 90 kB 00:00
2026-03-10T06:39:32.946 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : abseil-cpp-20211102.0-4.el9.x86_64 63/138
2026-03-10T06:39:32.962 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-grpcio-1.46.7-10.el9.x86_64 64/138
2026-03-10T06:39:32.965 INFO:teuthology.orchestra.run.vm09.stdout:(123/136): python3-routes-2.5.1-5.el9.noarch.rp 3.4 MB/s | 188 kB 00:00
2026-03-10T06:39:32.971 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : socat-1.7.4.1-8.el9.x86_64 65/138
2026-03-10T06:39:32.977 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-toml-0.10.2-6.el9.noarch 66/138
2026-03-10T06:39:32.986 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138
2026-03-10T06:39:32.988 INFO:teuthology.orchestra.run.vm09.stdout:(124/136): python3-rsa-4.9-2.el9.noarch.rpm 1.2 MB/s | 59 kB 00:00
2026-03-10T06:39:32.992 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-jaraco-text-4.0.0-2.el9.noarch 68/138
2026-03-10T06:39:33.001 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138
2026-03-10T06:39:33.007 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-tempora-5.0.0-2.el9.noarch 70/138
2026-03-10T06:39:33.012 INFO:teuthology.orchestra.run.vm09.stdout:(125/136): python3-tempora-5.0.0-2.el9.noarch.r 759 kB/s | 36 kB 00:00
2026-03-10T06:39:33.041 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-portend-3.1.0-2.el9.noarch 71/138
2026-03-10T06:39:33.044 INFO:teuthology.orchestra.run.vm09.stdout:(126/136): python3-typing-extensions-4.15.0-1.e 1.5 MB/s | 86 kB 00:00
2026-03-10T06:39:33.055 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-protobuf-3.14.0-17.el9.noarch 72/138
2026-03-10T06:39:33.068 INFO:teuthology.orchestra.run.vm09.stdout:(127/136): python3-webob-1.8.8-2.el9.noarch.rpm 4.1 MB/s | 230 kB 00:00
2026-03-10T06:39:33.093 INFO:teuthology.orchestra.run.vm09.stdout:(128/136): python3-websocket-client-1.2.3-2.el9 1.8 MB/s | 90 kB 00:00
2026-03-10T06:39:33.099 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138
2026-03-10T06:39:33.132 INFO:teuthology.orchestra.run.vm09.stdout:(129/136): python3-werkzeug-2.0.3-3.el9.1.noarc 6.6 MB/s | 427 kB 00:00
2026-03-10T06:39:33.172 INFO:teuthology.orchestra.run.vm09.stdout:(130/136): python3-xmltodict-0.12.0-15.el9.noar 281 kB/s | 22 kB 00:00
2026-03-10T06:39:33.182 INFO:teuthology.orchestra.run.vm09.stdout:(131/136): python3-zc-lockfile-2.0-10.el9.noarc 399 kB/s | 20 kB 00:00
2026-03-10T06:39:33.222 INFO:teuthology.orchestra.run.vm09.stdout:(132/136): re2-20211101-20.el9.x86_64.rpm 3.8 MB/s | 191 kB 00:00
2026-03-10T06:39:33.352 INFO:teuthology.orchestra.run.vm09.stdout:(133/136): thrift-0.15.0-4.el9.x86_64.rpm 9.3 MB/s | 1.6 MB 00:00
2026-03-10T06:39:33.367 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-devel-3.9.25-3.el9.x86_64 74/138
2026-03-10T06:39:33.404 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-babel-2.9.1-2.el9.noarch 75/138
2026-03-10T06:39:33.411 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-jinja2-2.11.3-8.el9.noarch 76/138
2026-03-10T06:39:33.473 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : openblas-0.3.29-1.el9.x86_64 77/138
2026-03-10T06:39:33.477 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : openblas-openmp-0.3.29-1.el9.x86_64 78/138
2026-03-10T06:39:33.502 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138
2026-03-10T06:39:33.885 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138
2026-03-10T06:39:33.980 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-numpy-1:1.23.5-2.el9.x86_64 81/138
2026-03-10T06:39:34.408 INFO:teuthology.orchestra.run.vm09.stdout:(134/136): librados2-19.2.3-678.ge911bdeb.el9.x 2.9 MB/s | 3.4 MB 00:01
2026-03-10T06:39:34.429 INFO:teuthology.orchestra.run.vm09.stdout:(135/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 2.9 MB/s | 3.2 MB 00:01
2026-03-10T06:39:34.773 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138
2026-03-10T06:39:34.803 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-scipy-1.9.3-2.el9.x86_64 83/138
2026-03-10T06:39:34.833 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : libxslt-1.1.34-12.el9.x86_64 84/138
2026-03-10T06:39:34.839 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : xmlstarlet-1.6.1-20.el9.x86_64 85/138
2026-03-10T06:39:34.995 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : libpmemobj-1.12.1-1.el9.x86_64 86/138
2026-03-10T06:39:34.999 INFO:teuthology.orchestra.run.vm08.stdout:  Upgrading        : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-10T06:39:35.030 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-10T06:39:35.035 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138
2026-03-10T06:39:35.043 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : boost-program-options-1.75.0-13.el9.x86_64 89/138
2026-03-10T06:39:35.305 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : parquet-libs-9.0.0-15.el9.x86_64 90/138
2026-03-10T06:39:35.307 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-10T06:39:35.327 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-10T06:39:35.329 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138
2026-03-10T06:39:36.437 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T06:39:36.449 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T06:39:36.472 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T06:39:36.488 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-ply-3.11-14.el9.noarch 94/138
2026-03-10T06:39:36.512 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-pycparser-2.20-6.el9.noarch 95/138
2026-03-10T06:39:36.606 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-cffi-1.14.5-5.el9.x86_64 96/138
2026-03-10T06:39:36.669 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-cryptography-36.0.1-5.el9.x86_64 97/138
2026-03-10T06:39:36.698 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138
2026-03-10T06:39:36.737 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-cheroot-10.0.1-4.el9.noarch 99/138
2026-03-10T06:39:36.801 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-cherrypy-18.6.1-2.el9.noarch 100/138
2026-03-10T06:39:36.812 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-asyncssh-2.13.2-5.el9.noarch 101/138
2026-03-10T06:39:36.818 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-10T06:39:36.824 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : pciutils-3.7.0-7.el9.x86_64 103/138
2026-03-10T06:39:36.829 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : qatlib-25.08.0-2.el9.x86_64 104/138
2026-03-10T06:39:36.831 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-10T06:39:36.848 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-10T06:39:37.148 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : qatzip-libs-1.3.1-1.el9.x86_64 106/138
2026-03-10T06:39:37.154 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-10T06:39:37.196 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-10T06:39:37.196 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-10T06:39:37.196 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-10T06:39:37.196 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:39:37.201 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T06:39:43.497 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T06:39:43.498 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /sys
2026-03-10T06:39:43.498 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /proc
2026-03-10T06:39:43.498 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /mnt
2026-03-10T06:39:43.498 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /var/tmp
2026-03-10T06:39:43.498 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /home
2026-03-10T06:39:43.498 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /root
2026-03-10T06:39:43.498 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /tmp
2026-03-10T06:39:43.498 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:39:43.619 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T06:39:43.643 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T06:39:43.643 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:39:43.643 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T06:39:43.643 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-10T06:39:43.643 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-10T06:39:43.643 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:39:43.871 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-10T06:39:43.893 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-10T06:39:43.893 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:39:43.893 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-10T06:39:43.893 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-10T06:39:43.893 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-10T06:39:43.893 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:39:43.901 INFO:teuthology.orchestra.run.vm08.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138 2026-03-10T06:39:43.904 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138 2026-03-10T06:39:43.923 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T06:39:43.923 INFO:teuthology.orchestra.run.vm08.stdout:Creating group 'qat' with GID 994. 2026-03-10T06:39:43.923 INFO:teuthology.orchestra.run.vm08.stdout:Creating group 'libstoragemgmt' with GID 993. 
2026-03-10T06:39:43.923 INFO:teuthology.orchestra.run.vm08.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993. 2026-03-10T06:39:43.923 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:39:43.934 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T06:39:43.960 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T06:39:43.960 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 2026-03-10T06:39:43.960 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:39:44.001 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138 2026-03-10T06:39:44.077 INFO:teuthology.orchestra.run.vm08.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138 2026-03-10T06:39:44.082 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-10T06:39:44.095 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-10T06:39:44.095 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:39:44.095 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-03-10T06:39:44.095 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:39:44.868 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T06:39:44.893 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T06:39:44.893 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:39:44.893 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T06:39:44.893 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T06:39:44.893 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T06:39:44.893 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:39:44.952 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T06:39:44.955 INFO:teuthology.orchestra.run.vm08.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T06:39:44.962 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138
2026-03-10T06:39:44.984 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138
2026-03-10T06:39:44.987 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T06:39:45.518 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T06:39:45.529 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T06:39:46.031 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T06:39:46.033 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T06:39:46.091 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T06:39:46.147 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138
2026-03-10T06:39:46.150 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T06:39:46.170 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T06:39:46.170 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:39:46.170 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T06:39:46.170 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T06:39:46.170 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T06:39:46.170 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:39:46.183 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T06:39:46.194 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T06:39:46.687 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138
2026-03-10T06:39:46.691 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T06:39:46.710 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T06:39:46.710 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:39:46.710 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T06:39:46.710 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T06:39:46.710 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T06:39:46.710 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:39:46.721 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T06:39:46.740 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T06:39:46.740 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:39:46.740 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T06:39:46.740 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:39:46.889 INFO:teuthology.orchestra.run.vm08.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T06:39:46.911 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T06:39:46.911 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:39:46.911 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T06:39:46.911 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T06:39:46.911 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T06:39:46.911 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:39:49.390 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138
2026-03-10T06:39:49.401 INFO:teuthology.orchestra.run.vm08.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138
2026-03-10T06:39:49.406 INFO:teuthology.orchestra.run.vm08.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138
2026-03-10T06:39:49.461 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138
2026-03-10T06:39:49.471 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-10T06:39:49.475 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138
2026-03-10T06:39:49.475 INFO:teuthology.orchestra.run.vm08.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138
2026-03-10T06:39:49.490 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138
2026-03-10T06:39:49.490 INFO:teuthology.orchestra.run.vm08.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T06:39:50.777 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T06:39:50.777 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138
2026-03-10T06:39:50.777 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138
2026-03-10T06:39:50.777 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138
2026-03-10T06:39:50.777 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T06:39:50.777 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138
2026-03-10T06:39:50.777 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T06:39:50.777 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138
2026-03-10T06:39:50.777 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138
2026-03-10T06:39:50.777 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138
2026-03-10T06:39:50.777 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138
2026-03-10T06:39:50.777 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T06:39:50.777 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138
2026-03-10T06:39:50.778 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 69/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138
2026-03-10T06:39:50.780 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 85/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 93/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138
2026-03-10T06:39:50.781 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138
2026-03-10T06:39:50.883 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T06:39:50.883 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:39:50.883 INFO:teuthology.orchestra.run.vm08.stdout:Upgraded:
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout:Installed:
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-10T06:39:50.884 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-jmespath-1.0.1-1.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply-3.11-14.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout:
python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T06:39:50.885 INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-10T06:39:50.886 
INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: zip-3.0-35.el9.x86_64 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:39:50.886 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 
2026-03-10T06:39:50.980 DEBUG:teuthology.parallel:result is None 2026-03-10T06:39:52.821 INFO:teuthology.orchestra.run.vm01.stdout:(136/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 742 kB/s | 19 MB 00:26 2026-03-10T06:39:52.823 INFO:teuthology.orchestra.run.vm01.stdout:-------------------------------------------------------------------------------- 2026-03-10T06:39:52.823 INFO:teuthology.orchestra.run.vm01.stdout:Total 3.8 MB/s | 210 MB 00:54 2026-03-10T06:39:53.320 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check 2026-03-10T06:39:53.368 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded. 2026-03-10T06:39:53.368 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test 2026-03-10T06:39:54.192 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded. 2026-03-10T06:39:54.192 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction 2026-03-10T06:39:55.068 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1 2026-03-10T06:39:55.082 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138 2026-03-10T06:39:55.094 INFO:teuthology.orchestra.run.vm01.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138 2026-03-10T06:39:55.263 INFO:teuthology.orchestra.run.vm01.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138 2026-03-10T06:39:55.265 INFO:teuthology.orchestra.run.vm01.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-10T06:39:55.325 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-10T06:39:55.327 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138 2026-03-10T06:39:55.356 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138 2026-03-10T06:39:55.365 INFO:teuthology.orchestra.run.vm01.stdout: Installing : 
python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-10T06:39:55.369 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138 2026-03-10T06:39:55.371 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138 2026-03-10T06:39:55.377 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138 2026-03-10T06:39:55.387 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138 2026-03-10T06:39:55.388 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T06:39:55.429 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T06:39:55.430 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138 2026-03-10T06:39:55.460 INFO:teuthology.orchestra.run.vm09.stdout:(136/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 632 kB/s | 19 MB 00:31 2026-03-10T06:39:55.464 INFO:teuthology.orchestra.run.vm09.stdout:-------------------------------------------------------------------------------- 2026-03-10T06:39:55.464 INFO:teuthology.orchestra.run.vm09.stdout:Total 3.7 MB/s | 210 MB 00:56 2026-03-10T06:39:55.470 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138 2026-03-10T06:39:55.507 INFO:teuthology.orchestra.run.vm01.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138 2026-03-10T06:39:55.544 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138 2026-03-10T06:39:55.550 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138 2026-03-10T06:39:55.575 INFO:teuthology.orchestra.run.vm01.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138 2026-03-10T06:39:55.589 INFO:teuthology.orchestra.run.vm01.stdout: 
Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138 2026-03-10T06:39:55.597 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138 2026-03-10T06:39:55.608 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138 2026-03-10T06:39:55.615 INFO:teuthology.orchestra.run.vm01.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138 2026-03-10T06:39:55.620 INFO:teuthology.orchestra.run.vm01.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138 2026-03-10T06:39:55.625 INFO:teuthology.orchestra.run.vm01.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138 2026-03-10T06:39:55.654 INFO:teuthology.orchestra.run.vm01.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138 2026-03-10T06:39:55.671 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138 2026-03-10T06:39:55.676 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138 2026-03-10T06:39:55.684 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138 2026-03-10T06:39:55.686 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138 2026-03-10T06:39:55.717 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138 2026-03-10T06:39:55.724 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138 2026-03-10T06:39:55.735 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138 2026-03-10T06:39:55.750 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138 2026-03-10T06:39:55.759 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138 2026-03-10T06:39:55.788 INFO:teuthology.orchestra.run.vm01.stdout: Installing : 
zip-3.0-35.el9.x86_64 33/138 2026-03-10T06:39:55.795 INFO:teuthology.orchestra.run.vm01.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138 2026-03-10T06:39:55.803 INFO:teuthology.orchestra.run.vm01.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138 2026-03-10T06:39:55.832 INFO:teuthology.orchestra.run.vm01.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138 2026-03-10T06:39:55.892 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138 2026-03-10T06:39:55.908 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138 2026-03-10T06:39:55.916 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138 2026-03-10T06:39:55.926 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138 2026-03-10T06:39:55.933 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138 2026-03-10T06:39:55.937 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138 2026-03-10T06:39:55.956 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138 2026-03-10T06:39:55.971 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-10T06:39:55.980 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138 2026-03-10T06:39:55.987 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138 2026-03-10T06:39:55.994 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138 2026-03-10T06:39:56.008 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138 2026-03-10T06:39:56.020 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138 
2026-03-10T06:39:56.021 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-10T06:39:56.021 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-10T06:39:56.031 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138 2026-03-10T06:39:56.092 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138 2026-03-10T06:39:56.101 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138 2026-03-10T06:39:56.110 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 52/138 2026-03-10T06:39:56.156 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138 2026-03-10T06:39:56.523 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138 2026-03-10T06:39:56.542 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138 2026-03-10T06:39:56.547 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138 2026-03-10T06:39:56.556 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138 2026-03-10T06:39:56.560 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138 2026-03-10T06:39:56.569 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138 2026-03-10T06:39:56.573 INFO:teuthology.orchestra.run.vm01.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138 2026-03-10T06:39:56.575 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138 2026-03-10T06:39:56.605 INFO:teuthology.orchestra.run.vm01.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138 2026-03-10T06:39:56.655 INFO:teuthology.orchestra.run.vm01.stdout: 
Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138 2026-03-10T06:39:56.669 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138 2026-03-10T06:39:56.677 INFO:teuthology.orchestra.run.vm01.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138 2026-03-10T06:39:56.682 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138 2026-03-10T06:39:56.691 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138 2026-03-10T06:39:56.696 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138 2026-03-10T06:39:56.705 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138 2026-03-10T06:39:56.711 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138 2026-03-10T06:39:56.743 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138 2026-03-10T06:39:56.757 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138 2026-03-10T06:39:56.798 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138 2026-03-10T06:39:56.830 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 
2026-03-10T06:39:56.831 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-10T06:39:57.054 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138 2026-03-10T06:39:57.086 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138 2026-03-10T06:39:57.093 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138 2026-03-10T06:39:57.153 INFO:teuthology.orchestra.run.vm01.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138 2026-03-10T06:39:57.157 INFO:teuthology.orchestra.run.vm01.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138 2026-03-10T06:39:57.180 INFO:teuthology.orchestra.run.vm01.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138 2026-03-10T06:39:57.549 INFO:teuthology.orchestra.run.vm01.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138 2026-03-10T06:39:57.641 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138 2026-03-10T06:39:57.703 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-10T06:39:57.717 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138 2026-03-10T06:39:57.729 INFO:teuthology.orchestra.run.vm09.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138 2026-03-10T06:39:57.898 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138 2026-03-10T06:39:57.900 INFO:teuthology.orchestra.run.vm09.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-10T06:39:57.962 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-10T06:39:57.964 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138 2026-03-10T06:39:57.994 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: 
libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138 2026-03-10T06:39:58.003 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-10T06:39:58.008 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138 2026-03-10T06:39:58.010 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138 2026-03-10T06:39:58.015 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138 2026-03-10T06:39:58.026 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138 2026-03-10T06:39:58.027 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T06:39:58.063 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T06:39:58.064 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138 2026-03-10T06:39:58.077 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138 2026-03-10T06:39:58.111 INFO:teuthology.orchestra.run.vm09.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138 2026-03-10T06:39:58.147 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138 2026-03-10T06:39:58.152 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138 2026-03-10T06:39:58.177 INFO:teuthology.orchestra.run.vm09.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138 2026-03-10T06:39:58.191 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138 2026-03-10T06:39:58.200 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138 2026-03-10T06:39:58.210 INFO:teuthology.orchestra.run.vm09.stdout: 
Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138 2026-03-10T06:39:58.216 INFO:teuthology.orchestra.run.vm09.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138 2026-03-10T06:39:58.221 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138 2026-03-10T06:39:58.226 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138 2026-03-10T06:39:58.253 INFO:teuthology.orchestra.run.vm09.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138 2026-03-10T06:39:58.269 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138 2026-03-10T06:39:58.274 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138 2026-03-10T06:39:58.280 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138 2026-03-10T06:39:58.283 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138 2026-03-10T06:39:58.312 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138 2026-03-10T06:39:58.318 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138 2026-03-10T06:39:58.328 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138 2026-03-10T06:39:58.341 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138 2026-03-10T06:39:58.349 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138 2026-03-10T06:39:58.378 INFO:teuthology.orchestra.run.vm09.stdout: Installing : zip-3.0-35.el9.x86_64 33/138 2026-03-10T06:39:58.383 INFO:teuthology.orchestra.run.vm09.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138 2026-03-10T06:39:58.391 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138 
2026-03-10T06:39:58.419 INFO:teuthology.orchestra.run.vm09.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138 2026-03-10T06:39:58.427 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138 2026-03-10T06:39:58.458 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138 2026-03-10T06:39:58.464 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138 2026-03-10T06:39:58.469 INFO:teuthology.orchestra.run.vm01.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138 2026-03-10T06:39:58.477 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138 2026-03-10T06:39:58.493 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138 2026-03-10T06:39:58.500 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138 2026-03-10T06:39:58.510 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138 2026-03-10T06:39:58.515 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138 2026-03-10T06:39:58.520 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138 2026-03-10T06:39:58.537 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138 2026-03-10T06:39:58.563 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138 2026-03-10T06:39:58.570 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138 2026-03-10T06:39:58.576 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138 2026-03-10T06:39:58.590 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138 
2026-03-10T06:39:58.602 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138 2026-03-10T06:39:58.615 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138 2026-03-10T06:39:58.629 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138 2026-03-10T06:39:58.632 INFO:teuthology.orchestra.run.vm01.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T06:39:58.665 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T06:39:58.670 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138 2026-03-10T06:39:58.680 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138 2026-03-10T06:39:58.682 INFO:teuthology.orchestra.run.vm01.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138 2026-03-10T06:39:58.689 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138 2026-03-10T06:39:58.700 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 52/138 2026-03-10T06:39:58.747 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138 2026-03-10T06:39:58.935 INFO:teuthology.orchestra.run.vm01.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138 2026-03-10T06:39:58.938 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T06:39:58.956 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T06:39:58.958 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138 2026-03-10T06:39:59.114 INFO:teuthology.orchestra.run.vm09.stdout: Installing : 
python3-google-auth-1:2.45.0-1.el9.noarch 54/138
2026-03-10T06:39:59.130 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138
2026-03-10T06:39:59.136 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138
2026-03-10T06:39:59.144 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138
2026-03-10T06:39:59.149 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138
2026-03-10T06:39:59.158 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138
2026-03-10T06:39:59.162 INFO:teuthology.orchestra.run.vm09.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138
2026-03-10T06:39:59.166 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138
2026-03-10T06:39:59.198 INFO:teuthology.orchestra.run.vm09.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138
2026-03-10T06:39:59.248 INFO:teuthology.orchestra.run.vm09.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138
2026-03-10T06:39:59.261 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138
2026-03-10T06:39:59.269 INFO:teuthology.orchestra.run.vm09.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138
2026-03-10T06:39:59.274 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138
2026-03-10T06:39:59.283 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138
2026-03-10T06:39:59.288 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138
2026-03-10T06:39:59.298 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138
2026-03-10T06:39:59.304 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138
2026-03-10T06:39:59.336 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138
2026-03-10T06:39:59.349 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138
2026-03-10T06:39:59.392 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138
2026-03-10T06:39:59.646 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138
2026-03-10T06:39:59.677 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138
2026-03-10T06:39:59.684 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138
2026-03-10T06:39:59.745 INFO:teuthology.orchestra.run.vm09.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138
2026-03-10T06:39:59.748 INFO:teuthology.orchestra.run.vm09.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138
2026-03-10T06:39:59.771 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138
2026-03-10T06:40:00.045 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T06:40:00.052 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T06:40:00.072 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T06:40:00.090 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138
2026-03-10T06:40:00.110 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138
2026-03-10T06:40:00.136 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138
2026-03-10T06:40:00.198 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138
2026-03-10T06:40:00.213 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138
2026-03-10T06:40:00.225 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138
2026-03-10T06:40:00.242 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138
2026-03-10T06:40:00.284 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138
2026-03-10T06:40:00.347 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138
2026-03-10T06:40:00.357 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138
2026-03-10T06:40:00.362 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-10T06:40:00.369 INFO:teuthology.orchestra.run.vm01.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138
2026-03-10T06:40:00.373 INFO:teuthology.orchestra.run.vm01.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138
2026-03-10T06:40:00.375 INFO:teuthology.orchestra.run.vm01.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-10T06:40:00.393 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-10T06:40:00.706 INFO:teuthology.orchestra.run.vm01.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138
2026-03-10T06:40:00.713 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-10T06:40:00.754 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-10T06:40:00.754 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-10T06:40:00.754 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-10T06:40:00.754 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:40:00.760 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T06:40:00.990 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138
2026-03-10T06:40:01.019 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138
2026-03-10T06:40:01.026 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138
2026-03-10T06:40:01.031 INFO:teuthology.orchestra.run.vm09.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138
2026-03-10T06:40:01.190 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138
2026-03-10T06:40:01.193 INFO:teuthology.orchestra.run.vm09.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-10T06:40:01.224 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-10T06:40:01.228 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138
2026-03-10T06:40:01.236 INFO:teuthology.orchestra.run.vm09.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138
2026-03-10T06:40:01.499 INFO:teuthology.orchestra.run.vm09.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138
2026-03-10T06:40:01.501 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-10T06:40:01.521 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-10T06:40:01.523 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138
2026-03-10T06:40:02.664 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T06:40:02.669 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T06:40:02.690 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T06:40:02.707 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138
2026-03-10T06:40:02.727 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138
2026-03-10T06:40:02.817 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138
2026-03-10T06:40:02.833 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138
2026-03-10T06:40:02.862 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138
2026-03-10T06:40:02.900 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138
2026-03-10T06:40:02.963 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138
2026-03-10T06:40:02.972 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138
2026-03-10T06:40:02.979 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-10T06:40:02.986 INFO:teuthology.orchestra.run.vm09.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138
2026-03-10T06:40:02.990 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138
2026-03-10T06:40:02.992 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-10T06:40:03.009 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-10T06:40:03.318 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138
2026-03-10T06:40:03.325 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-10T06:40:03.374 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-10T06:40:03.374 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-10T06:40:03.374 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-10T06:40:03.374 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:40:03.380 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T06:40:07.014 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T06:40:07.015 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /sys
2026-03-10T06:40:07.015 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /proc
2026-03-10T06:40:07.015 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /mnt
2026-03-10T06:40:07.015 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /var/tmp
2026-03-10T06:40:07.015 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /home
2026-03-10T06:40:07.015 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /root
2026-03-10T06:40:07.015 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /tmp
2026-03-10T06:40:07.015 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:40:07.138 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T06:40:07.161 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T06:40:07.161 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:07.161 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T06:40:07.161 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T06:40:07.161 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T06:40:07.161 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:40:07.393 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T06:40:07.414 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T06:40:07.414 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:07.414 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T06:40:07.414 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T06:40:07.414 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T06:40:07.414 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:40:07.422 INFO:teuthology.orchestra.run.vm01.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138
2026-03-10T06:40:07.425 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138
2026-03-10T06:40:07.442 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T06:40:07.442 INFO:teuthology.orchestra.run.vm01.stdout:Creating group 'qat' with GID 994.
2026-03-10T06:40:07.442 INFO:teuthology.orchestra.run.vm01.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-10T06:40:07.442 INFO:teuthology.orchestra.run.vm01.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-10T06:40:07.442 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:40:07.454 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T06:40:07.479 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T06:40:07.479 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-10T06:40:07.479 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:40:07.525 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138
2026-03-10T06:40:07.602 INFO:teuthology.orchestra.run.vm01.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138
2026-03-10T06:40:07.608 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T06:40:07.620 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T06:40:07.620 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:07.620 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T06:40:07.620 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:40:08.432 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T06:40:08.454 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T06:40:08.454 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:08.455 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T06:40:08.455 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T06:40:08.455 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T06:40:08.455 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:40:08.689 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T06:40:08.693 INFO:teuthology.orchestra.run.vm01.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T06:40:08.701 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138
2026-03-10T06:40:08.728 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138
2026-03-10T06:40:08.731 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T06:40:09.272 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T06:40:09.278 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T06:40:09.651 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T06:40:09.651 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /sys
2026-03-10T06:40:09.651 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /proc
2026-03-10T06:40:09.651 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /mnt
2026-03-10T06:40:09.651 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /var/tmp
2026-03-10T06:40:09.651 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /home
2026-03-10T06:40:09.651 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /root
2026-03-10T06:40:09.651 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /tmp
2026-03-10T06:40:09.651 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:40:09.774 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T06:40:09.790 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T06:40:09.793 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T06:40:09.796 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T06:40:09.796 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:09.796 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T06:40:09.796 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T06:40:09.796 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T06:40:09.796 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:40:09.852 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T06:40:09.911 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138
2026-03-10T06:40:09.913 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T06:40:09.932 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T06:40:09.933 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:09.933 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T06:40:09.933 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T06:40:09.933 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T06:40:09.933 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:40:09.947 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T06:40:09.957 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T06:40:10.029 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T06:40:10.048 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T06:40:10.048 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:10.048 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T06:40:10.048 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T06:40:10.048 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T06:40:10.048 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:40:10.056 INFO:teuthology.orchestra.run.vm09.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138
2026-03-10T06:40:10.059 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138
2026-03-10T06:40:10.075 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T06:40:10.075 INFO:teuthology.orchestra.run.vm09.stdout:Creating group 'qat' with GID 994.
2026-03-10T06:40:10.075 INFO:teuthology.orchestra.run.vm09.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-10T06:40:10.075 INFO:teuthology.orchestra.run.vm09.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-10T06:40:10.075 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:40:10.086 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T06:40:10.116 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T06:40:10.116 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-10T06:40:10.116 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:40:10.159 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138
2026-03-10T06:40:10.234 INFO:teuthology.orchestra.run.vm09.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138
2026-03-10T06:40:10.240 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T06:40:10.252 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T06:40:10.252 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:10.252 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T06:40:10.252 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:40:10.474 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138
2026-03-10T06:40:10.479 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T06:40:10.498 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T06:40:10.498 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:10.498 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T06:40:10.498 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T06:40:10.498 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T06:40:10.498 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:40:10.509 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T06:40:10.527 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T06:40:10.527 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:10.527 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T06:40:10.527 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:40:10.682 INFO:teuthology.orchestra.run.vm01.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T06:40:10.702 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T06:40:10.702 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:10.702 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T06:40:10.702 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T06:40:10.702 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T06:40:10.702 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:40:11.051 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T06:40:11.074 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T06:40:11.074 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:11.074 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T06:40:11.074 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T06:40:11.074 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T06:40:11.075 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:40:11.131 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T06:40:11.134 INFO:teuthology.orchestra.run.vm09.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T06:40:11.141 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138
2026-03-10T06:40:11.163 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138
2026-03-10T06:40:11.167 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T06:40:11.714 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T06:40:11.721 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T06:40:12.243 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T06:40:12.246 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T06:40:12.304 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T06:40:12.361 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138
2026-03-10T06:40:12.364 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T06:40:12.384 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T06:40:12.384 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:12.384 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T06:40:12.384 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T06:40:12.384 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T06:40:12.384 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:40:12.398 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T06:40:12.408 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T06:40:12.922 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138
2026-03-10T06:40:12.927 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T06:40:12.947 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T06:40:12.947 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:12.947 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T06:40:12.947 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T06:40:12.947 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T06:40:12.947 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:40:12.958 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T06:40:12.978 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T06:40:12.979 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:12.979 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T06:40:12.979 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:40:13.138 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T06:40:13.160 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T06:40:13.160 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:40:13.160 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T06:40:13.160 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T06:40:13.160 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T06:40:13.160 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:40:13.251 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138
2026-03-10T06:40:13.262 INFO:teuthology.orchestra.run.vm01.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138
2026-03-10T06:40:13.267 INFO:teuthology.orchestra.run.vm01.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138
2026-03-10T06:40:13.323 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138
2026-03-10T06:40:13.334 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-10T06:40:13.339 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138
2026-03-10T06:40:13.339 INFO:teuthology.orchestra.run.vm01.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138
2026-03-10T06:40:13.355 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138
2026-03-10T06:40:13.355 INFO:teuthology.orchestra.run.vm01.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138
2026-03-10T06:40:14.689 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138
2026-03-10T06:40:14.690
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138 2026-03-10T06:40:14.690 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138 2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138 
2026-03-10T06:40:14.690 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : 
python3-jinja2-2.11.3-8.el9.noarch 69/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138 2026-03-10T06:40:14.692 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 
85/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 93/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : 
python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138 2026-03-10T06:40:14.693 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138 2026-03-10T06:40:14.693 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138 2026-03-10T06:40:14.694 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138 2026-03-10T06:40:14.694 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138 2026-03-10T06:40:14.694 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138 2026-03-10T06:40:14.694 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138 2026-03-10T06:40:14.694 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T06:40:14.694 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138 2026-03-10T06:40:14.694 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout:Upgraded: 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout:Installed: 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout: 
ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.790 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T06:40:14.791 
INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-10T06:40:14.791 
INFO:teuthology.orchestra.run.vm01.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: lua-5.4.4-4.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T06:40:14.791 INFO:teuthology.orchestra.run.vm01.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T06:40:14.792 
INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: 
python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: 
python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T06:40:14.792 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-10T06:40:14.793 
INFO:teuthology.orchestra.run.vm01.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout: zip-3.0-35.el9.x86_64 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:40:14.793 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 
2026-03-10T06:40:14.876 DEBUG:teuthology.parallel:result is None 2026-03-10T06:40:15.788 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138 2026-03-10T06:40:15.799 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138 2026-03-10T06:40:15.804 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138 2026-03-10T06:40:15.860 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138 2026-03-10T06:40:15.869 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T06:40:15.874 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138 2026-03-10T06:40:15.874 INFO:teuthology.orchestra.run.vm09.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T06:40:15.889 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T06:40:15.889 INFO:teuthology.orchestra.run.vm09.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T06:40:17.254 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T06:40:17.254 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138 2026-03-10T06:40:17.254 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138 2026-03-10T06:40:17.254 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138 2026-03-10T06:40:17.254 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-10T06:40:17.254 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138 
2026-03-10T06:40:17.254 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138 2026-03-10T06:40:17.255 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138 2026-03-10T06:40:17.255 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138 2026-03-10T06:40:17.256 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138 2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138 
2026-03-10T06:40:17.256 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : 
python3-jinja2-2.11.3-8.el9.noarch 69/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138 2026-03-10T06:40:17.257 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 
85/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 93/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : 
python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138 2026-03-10T06:40:17.258 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138 2026-03-10T06:40:17.259 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138 2026-03-10T06:40:17.259 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138 2026-03-10T06:40:17.259 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout:Upgraded: 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout:Installed: 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: 
ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T06:40:17.355 
INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.355 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-10T06:40:17.356 
INFO:teuthology.orchestra.run.vm09.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: lua-5.4.4-4.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T06:40:17.356 
INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T06:40:17.356 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-10T06:40:17.357 
INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: zip-3.0-35.el9.x86_64 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:40:17.357 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 
2026-03-10T06:40:17.439 DEBUG:teuthology.parallel:result is None 2026-03-10T06:40:17.439 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T06:40:18.023 DEBUG:teuthology.orchestra.run.vm01:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-10T06:40:18.042 INFO:teuthology.orchestra.run.vm01.stdout:19.2.3-678.ge911bdeb.el9 2026-03-10T06:40:18.042 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-10T06:40:18.042 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 2026-03-10T06:40:18.043 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T06:40:18.658 DEBUG:teuthology.orchestra.run.vm08:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-10T06:40:18.677 INFO:teuthology.orchestra.run.vm08.stdout:19.2.3-678.ge911bdeb.el9 2026-03-10T06:40:18.677 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-10T06:40:18.677 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 2026-03-10T06:40:18.678 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T06:40:19.275 DEBUG:teuthology.orchestra.run.vm09:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-10T06:40:19.294 INFO:teuthology.orchestra.run.vm09.stdout:19.2.3-678.ge911bdeb.el9 2026-03-10T06:40:19.294 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-10T06:40:19.294 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 
2026-03-10T06:40:19.295 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-10T06:40:19.295 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T06:40:19.295 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-10T06:40:19.323 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T06:40:19.323 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-10T06:40:19.349 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T06:40:19.349 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-10T06:40:19.375 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-10T06:40:19.375 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T06:40:19.375 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/daemon-helper
2026-03-10T06:40:19.398 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-10T06:40:19.461 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T06:40:19.461 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/daemon-helper
2026-03-10T06:40:19.484 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-10T06:40:19.548 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T06:40:19.548 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/daemon-helper
2026-03-10T06:40:19.571 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-10T06:40:19.636 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-10T06:40:19.636 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T06:40:19.636 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-10T06:40:19.663 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-10T06:40:19.728 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T06:40:19.728 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-10T06:40:19.752 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-10T06:40:19.816 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T06:40:19.816 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-10T06:40:19.839 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-10T06:40:19.906 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-03-10T06:40:19.906 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T06:40:19.906 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/stdin-killer
2026-03-10T06:40:19.930 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-10T06:40:19.993 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T06:40:19.993 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/stdin-killer
2026-03-10T06:40:20.020 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-10T06:40:20.085 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T06:40:20.085 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/stdin-killer
2026-03-10T06:40:20.110 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-10T06:40:20.175 INFO:teuthology.run_tasks:Running task cephadm...
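Each "Shipping ..." step above streams a helper file over ssh into `sudo dd of=PATH` and then runs `chmod a=rx` so any user can read and execute it. A local sketch of that ship-then-chmod pattern (plain file I/O stands in for the ssh/dd pipe; the `ship` helper is ours):

```python
import os
import stat
import tempfile

def ship(path: str, data: bytes) -> None:
    # teuthology pipes the file contents into `sudo dd of=PATH` on the
    # remote host, then `chmod a=rx`; locally we just write the bytes and
    # set mode 0o555 (r-xr-xr-x), which is what a=rx amounts to.
    with open(path, "wb") as f:
        f.write(data)
    os.chmod(path, 0o555)

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "daemon-helper")
    ship(target, b"#!/bin/sh\nexec \"$@\"\n")
    mode = stat.S_IMODE(os.stat(target).st_mode)
    print(oct(mode))  # 0o555
```

Using `dd` with only `of=` simply copies stdin to the file, which is why no `if=` appears in the logged commands.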
2026-03-10T06:40:20.215 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon election default strategy': 1}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': False}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'MON_DOWN', 'POOL_APP_NOT_ENABLED', 'mon down', 'mons down', 'out of quorum', 'CEPHADM_FAILED_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-10T06:40:20.215 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T06:40:20.216 INFO:tasks.cephadm:Cluster fsid is 0204d884-1c4c-11f1-a017-9957fb527c0e
2026-03-10T06:40:20.216 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-10T06:40:20.216 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.101', 'mon.b': '192.168.123.108', 'mon.c': '192.168.123.109'}
2026-03-10T06:40:20.216 INFO:tasks.cephadm:First mon is mon.a on vm01
2026-03-10T06:40:20.216 INFO:tasks.cephadm:First mgr is a
2026-03-10T06:40:20.216 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-10T06:40:20.216 DEBUG:teuthology.orchestra.run.vm01:> sudo hostname $(hostname -s)
2026-03-10T06:40:20.239 DEBUG:teuthology.orchestra.run.vm08:> sudo hostname $(hostname -s)
2026-03-10T06:40:20.263 DEBUG:teuthology.orchestra.run.vm09:> sudo hostname $(hostname -s)
2026-03-10T06:40:20.286 INFO:tasks.cephadm:Downloading "compiled" cephadm from chacra
2026-03-10T06:40:20.286 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T06:40:20.916 INFO:tasks.cephadm:builder_project result: [{'url': 'https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'chacra_url': 'https://3.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'centos', 'distro_version': '9', 'distro_codename': None, 'modified': '2026-02-25 18:55:15.146628', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['source', 'x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678.ge911bdeb', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.26+soko16', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-10T06:40:21.497 INFO:tasks.util.chacra:got chacra host 3.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=centos%2F9%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T06:40:21.498 INFO:tasks.cephadm:Discovered chacra url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm
2026-03-10T06:40:21.498 INFO:tasks.cephadm:Downloading cephadm from url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm
2026-03-10T06:40:21.498 DEBUG:teuthology.orchestra.run.vm01:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T06:40:23.330 INFO:teuthology.orchestra.run.vm01.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 06:40 /home/ubuntu/cephtest/cephadm
2026-03-10T06:40:23.330 DEBUG:teuthology.orchestra.run.vm08:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T06:40:24.668 INFO:teuthology.orchestra.run.vm08.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 06:40 /home/ubuntu/cephtest/cephadm
2026-03-10T06:40:24.668 DEBUG:teuthology.orchestra.run.vm09:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T06:40:26.010 INFO:teuthology.orchestra.run.vm09.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 06:40 /home/ubuntu/cephtest/cephadm
2026-03-10T06:40:26.010 DEBUG:teuthology.orchestra.run.vm01:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T06:40:26.025 DEBUG:teuthology.orchestra.run.vm08:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T06:40:26.039 DEBUG:teuthology.orchestra.run.vm09:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T06:40:26.058 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
2026-03-10T06:40:26.058 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T06:40:26.067 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T06:40:26.081 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T06:40:26.215 INFO:teuthology.orchestra.run.vm01.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T06:40:26.228 INFO:teuthology.orchestra.run.vm08.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T06:40:26.253 INFO:teuthology.orchestra.run.vm09.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
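After the `curl --silent` download, the task refuses to `chmod +x` the cephadm binary unless it is non-empty and larger than 1000 bytes (`test -s FILE && test $(stat -c%s FILE) -gt 1000`): a silent curl failure can leave an empty file or a short error page behind. A sketch of that sanity check in Python (the `sane_cephadm` name is ours; 788355 is the file size reported in the `ls -l` output above):

```python
import os
import tempfile

def sane_cephadm(path, min_size=1000):
    # Mirrors `test -s FILE && test $(stat -c%s FILE) -gt 1000`:
    # reject anything too small to plausibly be the cephadm binary
    # before making it executable.
    return os.path.exists(path) and os.path.getsize(path) > min_size

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 788355)   # size reported by `ls -l` in the log
ok = sane_cephadm(f.name)
os.unlink(f.name)
print(ok)  # True
```

Chaining the checks with `&&` on the remote host means a failed size check makes the whole shipped command exit non-zero, which teuthology surfaces as a task failure.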
2026-03-10T06:41:22.173 INFO:teuthology.orchestra.run.vm01.stdout:{
2026-03-10T06:41:22.174 INFO:teuthology.orchestra.run.vm01.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T06:41:22.174 INFO:teuthology.orchestra.run.vm01.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T06:41:22.174 INFO:teuthology.orchestra.run.vm01.stdout: "repo_digests": [
2026-03-10T06:41:22.174 INFO:teuthology.orchestra.run.vm01.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T06:41:22.174 INFO:teuthology.orchestra.run.vm01.stdout: ]
2026-03-10T06:41:22.174 INFO:teuthology.orchestra.run.vm01.stdout:}
2026-03-10T06:41:28.386 INFO:teuthology.orchestra.run.vm09.stdout:{
2026-03-10T06:41:28.387 INFO:teuthology.orchestra.run.vm09.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T06:41:28.387 INFO:teuthology.orchestra.run.vm09.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T06:41:28.387 INFO:teuthology.orchestra.run.vm09.stdout: "repo_digests": [
2026-03-10T06:41:28.387 INFO:teuthology.orchestra.run.vm09.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T06:41:28.387 INFO:teuthology.orchestra.run.vm09.stdout: ]
2026-03-10T06:41:28.387 INFO:teuthology.orchestra.run.vm09.stdout:}
2026-03-10T06:41:28.642 INFO:teuthology.orchestra.run.vm08.stdout:{
2026-03-10T06:41:28.642 INFO:teuthology.orchestra.run.vm08.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T06:41:28.642 INFO:teuthology.orchestra.run.vm08.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T06:41:28.642 INFO:teuthology.orchestra.run.vm08.stdout: "repo_digests": [
2026-03-10T06:41:28.642 INFO:teuthology.orchestra.run.vm08.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T06:41:28.642 INFO:teuthology.orchestra.run.vm08.stdout: ]
2026-03-10T06:41:28.642 INFO:teuthology.orchestra.run.vm08.stdout:}
2026-03-10T06:41:28.657 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /etc/ceph
2026-03-10T06:41:28.680 DEBUG:teuthology.orchestra.run.vm08:> sudo mkdir -p /etc/ceph
2026-03-10T06:41:28.705 DEBUG:teuthology.orchestra.run.vm09:> sudo mkdir -p /etc/ceph
2026-03-10T06:41:28.730 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 777 /etc/ceph
2026-03-10T06:41:28.753 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod 777 /etc/ceph
2026-03-10T06:41:28.775 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 777 /etc/ceph
2026-03-10T06:41:28.797 INFO:tasks.cephadm:Writing seed config...
2026-03-10T06:41:28.798 INFO:tasks.cephadm: override: [global] mon election default strategy = 1
2026-03-10T06:41:28.798 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-10T06:41:28.798 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-10T06:41:28.798 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = False
2026-03-10T06:41:28.798 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-10T06:41:28.798 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-10T06:41:28.798 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-10T06:41:28.798 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-10T06:41:28.798 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-10T06:41:28.798 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-10T06:41:28.798 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T06:41:28.798 DEBUG:teuthology.orchestra.run.vm01:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-10T06:41:28.812 DEBUG:tasks.cephadm:Final config:
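The "Writing seed config..." step above takes the per-section overrides from the job YAML and serializes them into an INI-style `seed.ceph.conf`, streamed to the host via `dd`. A rough local re-creation of that serialization with `configparser` (the overrides dict is copied from the logged `override:` lines; teuthology's actual writer may differ in details):

```python
import configparser
import io

# per-section overrides as logged by the "override:" entries above
overrides = {
    "global": {"mon election default strategy": "1"},
    "mgr": {"debug mgr": "20", "debug ms": "1",
            "mgr/cephadm/use_agent": "False"},
    "mon": {"debug mon": "20", "debug ms": "1", "debug paxos": "20"},
    "osd": {"debug ms": "1", "debug osd": "20",
            "osd mclock iops capacity threshold hdd": "49000"},
}

cfg = configparser.ConfigParser()
cfg.read_dict(overrides)

buf = io.StringIO()
cfg.write(buf)           # INI text, one [section] per override group
print("[mon]" in buf.getvalue())  # True
```

Ceph accepts option names with embedded spaces, so keys like `mon election default strategy` can be written verbatim.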
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = 0204d884-1c4c-11f1-a017-9957fb527c0e
mon election default strategy = 1

[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000

[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
mgr/cephadm/use_agent = False

[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20

[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-10T06:41:28.812 DEBUG:teuthology.orchestra.run.vm01:mon.a> sudo journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mon.a.service
2026-03-10T06:41:28.854 DEBUG:teuthology.orchestra.run.vm01:mgr.a> sudo journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mgr.a.service
2026-03-10T06:41:28.896 INFO:tasks.cephadm:Bootstrapping...
2026-03-10T06:41:28.896 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.101 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-10T06:41:29.034 INFO:teuthology.orchestra.run.vm01.stdout:--------------------------------------------------------------------------------
2026-03-10T06:41:29.034 INFO:teuthology.orchestra.run.vm01.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '0204d884-1c4c-11f1-a017-9957fb527c0e', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'a', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.101', '--skip-admin-label']
2026-03-10T06:41:29.034 INFO:teuthology.orchestra.run.vm01.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-10T06:41:29.034 INFO:teuthology.orchestra.run.vm01.stdout:Verifying podman|docker is present...
2026-03-10T06:41:29.053 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stdout 5.8.0
2026-03-10T06:41:29.053 INFO:teuthology.orchestra.run.vm01.stdout:Verifying lvm2 is present...
2026-03-10T06:41:29.053 INFO:teuthology.orchestra.run.vm01.stdout:Verifying time synchronization is in place...
2026-03-10T06:41:29.059 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T06:41:29.059 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T06:41:29.064 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T06:41:29.064 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-10T06:41:29.069 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout enabled
2026-03-10T06:41:29.074 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout active
2026-03-10T06:41:29.074 INFO:teuthology.orchestra.run.vm01.stdout:Unit chronyd.service is enabled and running
2026-03-10T06:41:29.074 INFO:teuthology.orchestra.run.vm01.stdout:Repeating the final host check...
2026-03-10T06:41:29.092 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stdout 5.8.0
2026-03-10T06:41:29.092 INFO:teuthology.orchestra.run.vm01.stdout:podman (/bin/podman) version 5.8.0 is present
2026-03-10T06:41:29.092 INFO:teuthology.orchestra.run.vm01.stdout:systemctl is present
2026-03-10T06:41:29.092 INFO:teuthology.orchestra.run.vm01.stdout:lvcreate is present
2026-03-10T06:41:29.098 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T06:41:29.098 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T06:41:29.102 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T06:41:29.102 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-10T06:41:29.107 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout enabled
2026-03-10T06:41:29.112 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout active
2026-03-10T06:41:29.112 INFO:teuthology.orchestra.run.vm01.stdout:Unit chronyd.service is enabled and running
2026-03-10T06:41:29.112 INFO:teuthology.orchestra.run.vm01.stdout:Host looks OK
2026-03-10T06:41:29.112 INFO:teuthology.orchestra.run.vm01.stdout:Cluster fsid: 0204d884-1c4c-11f1-a017-9957fb527c0e
2026-03-10T06:41:29.112 INFO:teuthology.orchestra.run.vm01.stdout:Acquiring lock 140192748874912 on /run/cephadm/0204d884-1c4c-11f1-a017-9957fb527c0e.lock
2026-03-10T06:41:29.112 INFO:teuthology.orchestra.run.vm01.stdout:Lock 140192748874912 acquired on /run/cephadm/0204d884-1c4c-11f1-a017-9957fb527c0e.lock
2026-03-10T06:41:29.112 INFO:teuthology.orchestra.run.vm01.stdout:Verifying IP 192.168.123.101 port 3300 ...
2026-03-10T06:41:29.113 INFO:teuthology.orchestra.run.vm01.stdout:Verifying IP 192.168.123.101 port 6789 ...
2026-03-10T06:41:29.113 INFO:teuthology.orchestra.run.vm01.stdout:Base mon IP(s) is [192.168.123.101:3300, 192.168.123.101:6789], mon addrv is [v2:192.168.123.101:3300,v1:192.168.123.101:6789]
2026-03-10T06:41:29.115 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.101 metric 100
2026-03-10T06:41:29.115 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.101 metric 100
2026-03-10T06:41:29.117 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-10T06:41:29.117 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium
2026-03-10T06:41:29.120 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-10T06:41:29.120 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-10T06:41:29.120 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T06:41:29.120 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000
2026-03-10T06:41:29.120 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:1/64 scope link noprefixroute
2026-03-10T06:41:29.120 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T06:41:29.120 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.0/24`
2026-03-10T06:41:29.120 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.0/24`
2026-03-10T06:41:29.120 INFO:teuthology.orchestra.run.vm01.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24']
2026-03-10T06:41:29.121 INFO:teuthology.orchestra.run.vm01.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-10T06:41:29.121 INFO:teuthology.orchestra.run.vm01.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T06:41:30.347 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c
2026-03-10T06:41:30.347 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
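The "Mon IP ... is in CIDR network ..." / "Inferred mon public CIDR" messages above come from cephadm walking the host's `ip route` output and picking the local network(s) that contain the mon IP. A rough re-creation with the standard `ipaddress` module (the `infer_public_cidr` helper is our name, and real cephadm parses the route table rather than taking a ready-made list):

```python
import ipaddress

def infer_public_cidr(mon_ip, local_networks):
    # Return the first locally-configured CIDR that contains the mon IP,
    # approximating cephadm's public_network inference during bootstrap.
    addr = ipaddress.ip_address(mon_ip)
    for net in local_networks:
        # membership across IPv4/IPv6 versions is simply False, so mixed
        # route tables (fe80::/64 alongside 192.168.123.0/24) are fine
        if addr in ipaddress.ip_network(net, strict=False):
            return net
    return None

# networks taken from the /sbin/ip output in the log above
print(infer_public_cidr("192.168.123.101", ["192.168.123.0/24", "fe80::/64"]))
```

Because both the v2 (3300) and v1 (6789) mon addresses resolve against the same route, the log reports the match twice before setting `public_network` to `192.168.123.0/24`.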
2026-03-10T06:41:30.347 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Getting image source signatures
2026-03-10T06:41:30.347 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7
2026-03-10T06:41:30.347 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92
2026-03-10T06:41:30.347 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c
2026-03-10T06:41:30.347 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Writing manifest to image destination
2026-03-10T06:41:30.487 INFO:teuthology.orchestra.run.vm01.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T06:41:30.487 INFO:teuthology.orchestra.run.vm01.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T06:41:30.487 INFO:teuthology.orchestra.run.vm01.stdout:Extracting ceph user uid/gid from container image...
2026-03-10T06:41:30.603 INFO:teuthology.orchestra.run.vm01.stdout:stat: stdout 167 167
2026-03-10T06:41:30.604 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial keys...
2026-03-10T06:41:30.701 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQAava9pFtjnKBAAaK2dnJIUVSFnZ2ptkt/CPg==
2026-03-10T06:41:30.797 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQAava9pI7JzLhAACKQaIz0mGPAW3BpVaInRcA==
2026-03-10T06:41:30.929 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQAava9ptuhLNRAAZC7+GSN1sxoiAqcvS80uYg==
2026-03-10T06:41:30.929 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial monmap...
2026-03-10T06:41:31.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T06:41:31.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-10T06:41:31.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 0204d884-1c4c-11f1-a017-9957fb527c0e
2026-03-10T06:41:31.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T06:41:31.014 INFO:teuthology.orchestra.run.vm01.stdout:monmaptool for a [v2:192.168.123.101:3300,v1:192.168.123.101:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T06:41:31.014 INFO:teuthology.orchestra.run.vm01.stdout:setting min_mon_release = quincy
2026-03-10T06:41:31.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: set fsid to 0204d884-1c4c-11f1-a017-9957fb527c0e
2026-03-10T06:41:31.014 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T06:41:31.014 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:41:31.014 INFO:teuthology.orchestra.run.vm01.stdout:Creating mon...
2026-03-10T06:41:31.116 INFO:teuthology.orchestra.run.vm01.stdout:create mon.a on
2026-03-10T06:41:31.267 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target".
2026-03-10T06:41:31.377 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-10T06:41:31.505 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-0204d884-1c4c-11f1-a017-9957fb527c0e.target → /etc/systemd/system/ceph-0204d884-1c4c-11f1-a017-9957fb527c0e.target.
2026-03-10T06:41:31.505 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-0204d884-1c4c-11f1-a017-9957fb527c0e.target → /etc/systemd/system/ceph-0204d884-1c4c-11f1-a017-9957fb527c0e.target.
2026-03-10T06:41:31.652 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mon.a
2026-03-10T06:41:31.653 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to reset failed state of unit ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mon.a.service: Unit ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mon.a.service not loaded.
2026-03-10T06:41:31.782 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-0204d884-1c4c-11f1-a017-9957fb527c0e.target.wants/ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mon.a.service → /etc/systemd/system/ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@.service.
2026-03-10T06:41:32.097 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present
2026-03-10T06:41:32.097 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to enable service . firewalld.service is not available
2026-03-10T06:41:32.098 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mon to start...
2026-03-10T06:41:32.098 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mon...
2026-03-10T06:41:32.307 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout cluster:
2026-03-10T06:41:32.307 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout id: 0204d884-1c4c-11f1-a017-9957fb527c0e
2026-03-10T06:41:32.307 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout health: HEALTH_OK
2026-03-10T06:41:32.308 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T06:41:32.308 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout services:
2026-03-10T06:41:32.308 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.136613s)
2026-03-10T06:41:32.308 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr: no daemons active
2026-03-10T06:41:32.308 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in
2026-03-10T06:41:32.308 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T06:41:32.308 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout data:
2026-03-10T06:41:32.308 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs
2026-03-10T06:41:32.308 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B
2026-03-10T06:41:32.308 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail
2026-03-10T06:41:32.308 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout pgs:
2026-03-10T06:41:32.308 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T06:41:32.308 INFO:teuthology.orchestra.run.vm01.stdout:mon is available
2026-03-10T06:41:32.308 INFO:teuthology.orchestra.run.vm01.stdout:Assimilating anything we can from ceph.conf...
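The "Waiting for mon to start..." / "mon is available" exchange above is a poll-until-ready loop: cephadm repeatedly runs `ceph status` against the new monitor until it answers. A generic sketch of that pattern (the `wait_for` helper and the simulated probe are ours, not cephadm's actual internals):

```python
import time

def wait_for(check, attempts=30, interval=1.0):
    # Poll `check()` until it returns truthy or the attempts run out;
    # cephadm does the moral equivalent with `ceph status` calls.
    for _ in range(attempts):
        if check():
            return True
        time.sleep(interval)
    return False

# simulate a mon that only answers on the third probe
state = {"probes": 0}
def mon_up():
    state["probes"] += 1
    return state["probes"] >= 3

print(wait_for(mon_up, attempts=5, interval=0.0))  # True
```

Bounding the attempts matters in a test harness: a mon that never comes up should fail the task with a timeout instead of hanging the run.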
2026-03-10T06:41:32.493 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout fsid = 0204d884-1c4c-11f1-a017-9957fb527c0e
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.101:3300,v1:192.168.123.101:6789]
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr/cephadm/use_agent = False
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T06:41:32.494 INFO:teuthology.orchestra.run.vm01.stdout:Generating new minimal ceph.conf...
2026-03-10T06:41:32.687 INFO:teuthology.orchestra.run.vm01.stdout:Restarting the monitor...
2026-03-10T06:41:32.835 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:32 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-a[50231]: 2026-03-10T06:41:32.759+0000 7f2043e34640 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2026-03-10T06:41:33.036 INFO:teuthology.orchestra.run.vm01.stdout:Setting public_network to 192.168.123.0/24 in mon config section
2026-03-10T06:41:33.100 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:32 vm01 podman[50440]: 2026-03-10 06:41:32.832254681 +0000 UTC m=+0.084559903 container died 3d4f4dec98b18fe8c9d5ea32aea3ab798034f24f123b7b4d47795c0e0b822b7f (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-a, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
2026-03-10T06:41:33.100 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:32 vm01 podman[50440]: 2026-03-10 06:41:32.848453289 +0000 UTC m=+0.100758511 container remove 3d4f4dec98b18fe8c9d5ea32aea3ab798034f24f123b7b4d47795c0e0b822b7f (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-a, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T06:41:33.100 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:32 vm01 bash[50440]: ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-a
2026-03-10T06:41:33.100 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:32 vm01 systemd[1]: ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mon.a.service: Deactivated successfully.
2026-03-10T06:41:33.100 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:32 vm01 systemd[1]: Stopped Ceph mon.a for 0204d884-1c4c-11f1-a017-9957fb527c0e.
2026-03-10T06:41:33.100 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:32 vm01 systemd[1]: Starting Ceph mon.a for 0204d884-1c4c-11f1-a017-9957fb527c0e...
2026-03-10T06:41:33.100 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:32 vm01 podman[50510]: 2026-03-10 06:41:32.988150872 +0000 UTC m=+0.014957448 container create c842826e15e7daccef83025e06ea751b67d37d5679d909c2685d1f4a0513accd (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-a, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 podman[50510]: 2026-03-10 06:41:33.026288043 +0000 UTC m=+0.053094619 container init c842826e15e7daccef83025e06ea751b67d37d5679d909c2685d1f4a0513accd (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-a, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default)
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 podman[50510]: 2026-03-10 06:41:33.028397041 +0000 UTC m=+0.055203617 container start c842826e15e7daccef83025e06ea751b67d37d5679d909c2685d1f4a0513accd (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-a, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 bash[50510]: c842826e15e7daccef83025e06ea751b67d37d5679d909c2685d1f4a0513accd
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 podman[50510]: 2026-03-10 06:41:32.981964643 +0000 UTC m=+0.008771229 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 systemd[1]: Started Ceph mon.a for 0204d884-1c4c-11f1-a017-9957fb527c0e.
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: set uid:gid to 167:167 (ceph:ceph)
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: pidfile_write: ignore empty --pid-file
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: load: jerasure load: lrc
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: RocksDB version: 7.9.2
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Git sha 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Compile date 2026-02-25 18:11:04
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: DB SUMMARY
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: DB Session ID: W2DCBCPV4SBZOARDYIL9
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: CURRENT file: CURRENT
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: IDENTITY file: IDENTITY
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 75535 ;
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.error_if_exists: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.create_if_missing: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.paranoid_checks: 1
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.env: 0x55763ac0adc0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.fs: PosixFileSystem
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.info_log: 0x55763d1e8700
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_file_opening_threads: 16
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.statistics: (nil)
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.use_fsync: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_log_file_size: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.keep_log_file_num: 1000
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.recycle_log_file_num: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.allow_fallocate: 1
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.allow_mmap_reads: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.allow_mmap_writes: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.use_direct_reads: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.create_missing_column_families: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.db_log_dir:
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.wal_dir:
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.advise_random_on_open: 1
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.db_write_buffer_size: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.write_buffer_manager: 0x55763d1ed900
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.rate_limiter: (nil)
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T06:41:33.101 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.wal_recovery_mode: 2
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.enable_thread_tracking: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.enable_pipelined_write: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.unordered_write: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.row_cache: None
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.wal_filter: None
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.allow_ingest_behind: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.two_write_queues: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.manual_wal_flush: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.wal_compression: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.atomic_flush: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.log_readahead_size: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.best_efforts_recovery: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.allow_data_in_errors: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.db_host_id: __hostname__
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.enforce_single_del_contracts: true
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_background_jobs: 2
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_background_compactions: -1
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_subcompactions: 1
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_total_wal_size: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_open_files: -1
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.bytes_per_sync: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compaction_readahead_size: 0
2026-03-10T06:41:33.102 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_background_flushes: -1
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Compression algorithms supported:
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: kZSTD supported: 0
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: kXpressCompression supported: 0
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: kBZip2Compression supported: 0
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: kLZ4Compression supported: 1
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: kZlibCompression supported: 1
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: kLZ4HCCompression supported: 1
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: kSnappyCompression supported: 1
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: DMutex implementation: pthread_mutex_t
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.merge_operator:
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compaction_filter: None
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compaction_filter_factory: None
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.sst_partitioner_factory: None
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55763d1e8640)
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  cache_index_and_filter_blocks: 1
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  pin_top_level_index_and_filter: 1
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  index_type: 0
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  data_block_index_type: 0
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  index_shortening: 1
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  data_block_hash_table_util_ratio: 0.750000
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  checksum: 4
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  no_block_cache: 0
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  block_cache: 0x55763d20d350
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  block_cache_name: BinnedLRUCache
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  block_cache_options:
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  capacity : 536870912
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  num_shard_bits : 4
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  strict_capacity_limit : 0
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  high_pri_pool_ratio: 0.000
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  block_cache_compressed: (nil)
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  persistent_cache: (nil)
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  block_size: 4096
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  block_size_deviation: 10
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  block_restart_interval: 16
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  index_block_restart_interval: 1
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  metadata_block_size: 4096
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  partition_filters: 0
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  use_delta_encoding: 1
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  filter_policy: bloomfilter
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  whole_key_filtering: 1
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  verify_compression: 0
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  read_amp_bytes_per_bit: 0
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  format_version: 5
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  enable_index_compression: 1
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  block_align: 0
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  max_auto_readahead_size: 262144
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  prepopulate_block_cache: 0
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  initial_auto_readahead_size: 8192
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:  num_file_reads_for_auto_readahead: 2
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.write_buffer_size: 33554432
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_write_buffer_number: 2
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compression: NoCompression
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.bottommost_compression: Disabled
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.prefix_extractor: nullptr
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.num_levels: 7
2026-03-10T06:41:33.103 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compression_opts.level: 32767
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compression_opts.strategy: 0
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compression_opts.enabled: false
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.target_file_size_base: 67108864
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.arena_block_size: 1048576
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.disable_auto_compactions: 0
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb:
Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.inplace_update_support: 0 2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.bloom_locality: 0 2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.max_successive_merges: 0 2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.paranoid_file_checks: 0 
2026-03-10T06:41:33.104 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.force_consistency_checks: 1 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.ttl: 2592000 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.enable_blob_files: false 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.min_blob_size: 0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.blob_file_size: 268435456 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T06:41:33.105 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.blob_file_starting_level: 0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bbcefac5-a17b-48cc-ae31-7609d43efd6b 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773124893058891, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773124893060968, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 225, "table_properties": {"data_size": 70895, "index_size": 174, 
"index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9705, "raw_average_key_size": 49, "raw_value_size": 65374, "raw_average_value_size": 333, "num_data_blocks": 8, "num_entries": 196, "num_filter_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773124893, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bbcefac5-a17b-48cc-ae31-7609d43efd6b", "db_session_id": "W2DCBCPV4SBZOARDYIL9", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773124893063016, "job": 1, "event": "recovery_finished"} 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55763d20ee00 
2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: DB pointer 0x55763d324000 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: ** DB Stats ** 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: ** Compaction Stats [default] ** 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: L0 2/0 72.77 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 37.0 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Sum 2/0 72.77 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 37.0 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 37.0 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: ** Compaction Stats [default] ** 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 37.0 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T06:41:33.105 
INFO:journalctl@ceph.mon.a.vm01.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Cumulative compaction: 0.00 GB write, 3.97 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Interval compaction: 0.00 GB write, 3.97 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Block cache BinnedLRUCache@0x55763d20d350#2 capacity: 512.00 MB usage: 6.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: Block cache entry stats(count,size,portion): DataBlock(2,5.03 KB,0.000959635%) FilterBlock(2,0.70 KB,0.00013411%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%) 2026-03-10T06:41:33.105 INFO:journalctl@ceph.mon.a.vm01.stdout: 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: starting mon.a rank 0 at public addrs [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] at bind addrs [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon_data 
/var/lib/ceph/mon/ceph-a fsid 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: mon.a@-1(???) e1 preinit fsid 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: mon.a@-1(???).mds e1 new map 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: mon.a@-1(???).mds e1 print_map 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout: e1 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout: btime 2026-03-10T06:41:32:128356+0000 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout: legacy client fscid: -1 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout: 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout: No filesystems configured 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T06:41:33.106 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T06:41:33.106 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-10T06:41:33.246 INFO:teuthology.orchestra.run.vm01.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T06:41:33.246 INFO:teuthology.orchestra.run.vm01.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T06:41:33.246 INFO:teuthology.orchestra.run.vm01.stdout:Creating mgr... 2026-03-10T06:41:33.247 INFO:teuthology.orchestra.run.vm01.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T06:41:33.247 INFO:teuthology.orchestra.run.vm01.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-10T06:41:33.362 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T06:41:33.362 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: monmap epoch 1 2026-03-10T06:41:33.362 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: fsid 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:41:33.362 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: last_changed 2026-03-10T06:41:30.998723+0000 2026-03-10T06:41:33.362 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: created 2026-03-10T06:41:30.998723+0000 2026-03-10T06:41:33.362 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: min_mon_release 19 (squid) 2026-03-10T06:41:33.362 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: election_strategy: 1 2026-03-10T06:41:33.362 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T06:41:33.362 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: fsmap 2026-03-10T06:41:33.362 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T06:41:33.362 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:33 vm01 ceph-mon[50525]: mgrmap e1: no daemons active 2026-03-10T06:41:33.392 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mgr.a 2026-03-10T06:41:33.392 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to reset failed state of unit ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mgr.a.service: Unit ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mgr.a.service not loaded. 2026-03-10T06:41:33.524 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-0204d884-1c4c-11f1-a017-9957fb527c0e.target.wants/ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mgr.a.service → /etc/systemd/system/ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@.service. 2026-03-10T06:41:33.689 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-10T06:41:33.689 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T06:41:33.689 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-10T06:41:33.689 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-10T06:41:33.689 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr to start... 2026-03-10T06:41:33.689 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr... 
2026-03-10T06:41:33.914 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "0204d884-1c4c-11f1-a017-9957fb527c0e", 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T06:41:33.915 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T06:41:33.915 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T06:41:32:128356+0000", 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 
2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T06:41:32.129639+0000", 2026-03-10T06:41:33.915 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T06:41:33.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:33.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T06:41:33.916 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T06:41:33.916 INFO:teuthology.orchestra.run.vm01.stdout:mgr not available, waiting (1/15)... 2026-03-10T06:41:34.512 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:34 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/1782926798' entity='client.admin' 2026-03-10T06:41:34.512 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:34 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/3888280707' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T06:41:34.512 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:34 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:34.217+0000 7fe5201bf140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T06:41:34.861 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:34 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:34.511+0000 7fe5201bf140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T06:41:34.861 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:34 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. 
A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T06:41:34.861 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:34 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T06:41:34.861 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:34 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: from numpy import show_config as show_numpy_config 2026-03-10T06:41:34.861 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:34 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:34.592+0000 7fe5201bf140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T06:41:34.861 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:34 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:34.626+0000 7fe5201bf140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T06:41:34.861 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:34 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:34.691+0000 7fe5201bf140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T06:41:35.560 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:35 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:35.155+0000 7fe5201bf140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T06:41:35.560 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:35 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:35.258+0000 7fe5201bf140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T06:41:35.560 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:35 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:35.294+0000 7fe5201bf140 -1 mgr[py] Module osd_perf_query 
has missing NOTIFY_TYPES member 2026-03-10T06:41:35.560 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:35 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:35.326+0000 7fe5201bf140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T06:41:35.560 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:35 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:35.364+0000 7fe5201bf140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T06:41:35.560 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:35 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:35.398+0000 7fe5201bf140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T06:41:35.825 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:35 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:35.559+0000 7fe5201bf140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T06:41:35.826 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:35 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:35.610+0000 7fe5201bf140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T06:41:36.091 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:35 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:35.825+0000 7fe5201bf140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "0204d884-1c4c-11f1-a017-9957fb527c0e", 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": 
"HEALTH_OK", 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 
"num_up_osds": 0, 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T06:41:36.264 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T06:41:32:128356+0000", 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": 
[], 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T06:41:32.129639+0000", 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T06:41:36.265 INFO:teuthology.orchestra.run.vm01.stdout:mgr not 
available, waiting (2/15)... 2026-03-10T06:41:36.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:36 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/3492422233' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T06:41:36.361 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:36 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:36.090+0000 7fe5201bf140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T06:41:36.361 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:36 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:36.137+0000 7fe5201bf140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T06:41:36.361 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:36 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:36.184+0000 7fe5201bf140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T06:41:36.361 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:36 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:36.278+0000 7fe5201bf140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T06:41:36.361 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:36 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:36.314+0000 7fe5201bf140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T06:41:36.654 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:36 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:36.389+0000 7fe5201bf140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T06:41:36.655 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:36 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:36.494+0000 7fe5201bf140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T06:41:36.655 
INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:36 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:36.620+0000 7fe5201bf140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T06:41:36.655 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:36 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:36.654+0000 7fe5201bf140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T06:41:37.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:37 vm01 ceph-mon[50525]: Activating manager daemon a 2026-03-10T06:41:37.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:37 vm01 ceph-mon[50525]: mgrmap e2: a(active, starting, since 0.00447196s) 2026-03-10T06:41:37.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:37 vm01 ceph-mon[50525]: from='mgr.14100 192.168.123.101:0/2268688679' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T06:41:37.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:37 vm01 ceph-mon[50525]: from='mgr.14100 192.168.123.101:0/2268688679' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T06:41:37.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:37 vm01 ceph-mon[50525]: from='mgr.14100 192.168.123.101:0/2268688679' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T06:41:37.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:37 vm01 ceph-mon[50525]: from='mgr.14100 192.168.123.101:0/2268688679' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:41:37.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:37 vm01 ceph-mon[50525]: from='mgr.14100 192.168.123.101:0/2268688679' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T06:41:37.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:37 vm01 ceph-mon[50525]: Manager daemon a is now available 2026-03-10T06:41:37.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 
06:41:37 vm01 ceph-mon[50525]: from='mgr.14100 192.168.123.101:0/2268688679' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T06:41:37.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:37 vm01 ceph-mon[50525]: from='mgr.14100 192.168.123.101:0/2268688679' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T06:41:37.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:37 vm01 ceph-mon[50525]: from='mgr.14100 192.168.123.101:0/2268688679' entity='mgr.a' 2026-03-10T06:41:37.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:37 vm01 ceph-mon[50525]: from='mgr.14100 192.168.123.101:0/2268688679' entity='mgr.a' 2026-03-10T06:41:37.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:37 vm01 ceph-mon[50525]: from='mgr.14100 192.168.123.101:0/2268688679' entity='mgr.a' 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "0204d884-1c4c-11f1-a017-9957fb527c0e", 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T06:41:38.548 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T06:41:38.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 
2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T06:41:32:128356+0000", 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T06:41:38.549 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T06:41:32.129639+0000", 2026-03-10T06:41:38.549 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T06:41:38.550 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T06:41:38.550 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T06:41:38.550 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T06:41:38.550 INFO:teuthology.orchestra.run.vm01.stdout:mgr is available 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout fsid = 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 
mon_host = [v2:192.168.123.101:3300,v1:192.168.123.101:6789] 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T06:41:38.786 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T06:41:38.787 INFO:teuthology.orchestra.run.vm01.stdout:Enabling cephadm module... 2026-03-10T06:41:38.794 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:38 vm01 ceph-mon[50525]: mgrmap e3: a(active, since 1.0077s) 2026-03-10T06:41:38.794 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:38 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/1449100352' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T06:41:39.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:39 vm01 ceph-mon[50525]: mgrmap e4: a(active, since 2s) 2026-03-10T06:41:39.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:39 vm01 ceph-mon[50525]: from='client.? 
192.168.123.101:0/4022093861' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T06:41:39.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:39 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/4022093861' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T06:41:39.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:39 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/3201585128' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T06:41:39.925 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:39 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: ignoring --setuser ceph since I am not root 2026-03-10T06:41:39.925 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:39 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: ignoring --setgroup ceph since I am not root 2026-03-10T06:41:39.925 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:39 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:39.882+0000 7fb9552ca140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T06:41:39.925 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:39 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:39.923+0000 7fb9552ca140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T06:41:40.094 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T06:41:40.094 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-10T06:41:40.094 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T06:41:40.094 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "active_name": "a", 2026-03-10T06:41:40.094 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T06:41:40.094 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T06:41:40.094 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for the mgr to restart... 2026-03-10T06:41:40.094 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr epoch 5... 2026-03-10T06:41:40.611 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:40 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:40.352+0000 7fb9552ca140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T06:41:41.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:40 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/3201585128' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T06:41:41.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:40 vm01 ceph-mon[50525]: mgrmap e5: a(active, since 3s) 2026-03-10T06:41:41.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:40 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/2478665642' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T06:41:41.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:40 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:40.653+0000 7fb9552ca140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T06:41:41.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:40 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
2026-03-10T06:41:41.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:40 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T06:41:41.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:40 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: from numpy import show_config as show_numpy_config 2026-03-10T06:41:41.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:40 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:40.735+0000 7fb9552ca140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T06:41:41.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:40 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:40.772+0000 7fb9552ca140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T06:41:41.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:40 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:40.838+0000 7fb9552ca140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T06:41:41.611 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:41 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:41.301+0000 7fb9552ca140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T06:41:41.611 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:41 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:41.404+0000 7fb9552ca140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T06:41:41.611 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:41 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:41.441+0000 7fb9552ca140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T06:41:41.611 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 
06:41:41 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:41.473+0000 7fb9552ca140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T06:41:41.611 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:41 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:41.511+0000 7fb9552ca140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T06:41:41.611 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:41 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:41.545+0000 7fb9552ca140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T06:41:42.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:41 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:41.702+0000 7fb9552ca140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T06:41:42.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:41 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:41.749+0000 7fb9552ca140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T06:41:42.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:41 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:41.950+0000 7fb9552ca140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T06:41:42.556 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:42.206+0000 7fb9552ca140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T06:41:42.556 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:42.240+0000 7fb9552ca140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T06:41:42.556 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:42 vm01 
ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:42.279+0000 7fb9552ca140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T06:41:42.556 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:42.350+0000 7fb9552ca140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T06:41:42.556 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:42.384+0000 7fb9552ca140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T06:41:42.556 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:42.455+0000 7fb9552ca140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: Active manager daemon a restarted
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: Activating manager daemon a
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: osdmap e2: 0 total, 0 up, 0 in
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: mgrmap e6: a(active, starting, since 0.00488149s)
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: Manager daemon a is now available
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a'
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a'
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:42.555+0000 7fb9552ca140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:42.679+0000 7fb9552ca140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T06:41:42.861 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:42 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:42.711+0000 7fb9552ca140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T06:41:43.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout {
2026-03-10T06:41:43.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7,
2026-03-10T06:41:43.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-10T06:41:43.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }
2026-03-10T06:41:43.784 INFO:teuthology.orchestra.run.vm01.stdout:mgr epoch 5 is available
2026-03-10T06:41:43.784 INFO:teuthology.orchestra.run.vm01.stdout:Setting orchestrator backend to cephadm...
2026-03-10T06:41:44.031 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:43 vm01 ceph-mon[50525]: Found migration_current of "None". Setting to last migration.
2026-03-10T06:41:44.031 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:43 vm01 ceph-mon[50525]: mgrmap e7: a(active, since 1.00937s)
2026-03-10T06:41:44.307 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout value unchanged
2026-03-10T06:41:44.307 INFO:teuthology.orchestra.run.vm01.stdout:Generating ssh key...
2026-03-10T06:41:44.552 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: Generating public/private ed25519 key pair.
2026-03-10T06:41:44.552 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: Your identification has been saved in /tmp/tmp5njmzdxi/key
2026-03-10T06:41:44.552 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: Your public key has been saved in /tmp/tmp5njmzdxi/key.pub
2026-03-10T06:41:44.552 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: The key fingerprint is:
2026-03-10T06:41:44.552 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: SHA256:RNZZD+IOG2LJORioMoCfUHGr6QOoJWLegHn2JbxqWX0 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e
2026-03-10T06:41:44.552 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: The key's randomart image is:
2026-03-10T06:41:44.552 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: +--[ED25519 256]--+
2026-03-10T06:41:44.552 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: |..ooo o..oo |
2026-03-10T06:41:44.552 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: |+ .. = = .o. o |
2026-03-10T06:41:44.552 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: |.+ .o B + . . |
2026-03-10T06:41:44.553 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: |*.o+ . + = |
2026-03-10T06:41:44.553 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: |O+* o.. S . |
2026-03-10T06:41:44.553 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: |=Bo..+. E |
2026-03-10T06:41:44.553 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: |..o+o . |
2026-03-10T06:41:44.553 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: | oo |
2026-03-10T06:41:44.553 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: | .. |
2026-03-10T06:41:44.553 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: +----[SHA256]-----+
2026-03-10T06:41:44.813 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-mon[50525]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T06:41:44.813 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-mon[50525]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T06:41:44.813 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a'
2026-03-10T06:41:44.813 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a'
2026-03-10T06:41:44.813 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a'
2026-03-10T06:41:44.813 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T06:41:44.813 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a'
2026-03-10T06:41:44.813 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:44 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a'
2026-03-10T06:41:44.824 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINdTl6QkP3pX8SHsXPh1eUeRqNso2+T62LQYrGcm8OMl ceph-0204d884-1c4c-11f1-a017-9957fb527c0e
2026-03-10T06:41:44.824 INFO:teuthology.orchestra.run.vm01.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub
2026-03-10T06:41:44.824 INFO:teuthology.orchestra.run.vm01.stdout:Adding key to root@localhost authorized_keys...
2026-03-10T06:41:44.825 INFO:teuthology.orchestra.run.vm01.stdout:Adding host vm01...
2026-03-10T06:41:46.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:45 vm01 ceph-mon[50525]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T06:41:46.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:45 vm01 ceph-mon[50525]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T06:41:46.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:45 vm01 ceph-mon[50525]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T06:41:46.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:45 vm01 ceph-mon[50525]: Generating ssh key...
2026-03-10T06:41:46.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:45 vm01 ceph-mon[50525]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T06:41:46.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:45 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T06:41:46.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:45 vm01 ceph-mon[50525]: mgrmap e8: a(active, since 2s)
2026-03-10T06:41:46.548 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Added host 'vm01' with addr '192.168.123.101'
2026-03-10T06:41:46.548 INFO:teuthology.orchestra.run.vm01.stdout:Deploying unmanaged mon service...
2026-03-10T06:41:46.772 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:46 vm01 ceph-mon[50525]: [10/Mar/2026:06:41:44] ENGINE Bus STARTING
2026-03-10T06:41:46.772 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:46 vm01 ceph-mon[50525]: [10/Mar/2026:06:41:44] ENGINE Serving on http://192.168.123.101:8765
2026-03-10T06:41:46.772 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:46 vm01 ceph-mon[50525]: [10/Mar/2026:06:41:45] ENGINE Serving on https://192.168.123.101:7150
2026-03-10T06:41:46.772 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:46 vm01 ceph-mon[50525]: [10/Mar/2026:06:41:45] ENGINE Bus STARTED
2026-03-10T06:41:46.772 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:46 vm01 ceph-mon[50525]: [10/Mar/2026:06:41:45] ENGINE Client ('192.168.123.101', 36512) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T06:41:46.772 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:46 vm01 ceph-mon[50525]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm01", "addr": "192.168.123.101", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T06:41:46.772 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:46 vm01 ceph-mon[50525]: Deploying cephadm binary to vm01
2026-03-10T06:41:46.772 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:46 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a'
2026-03-10T06:41:46.772 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:46 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T06:41:46.835 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled mon update...
2026-03-10T06:41:46.835 INFO:teuthology.orchestra.run.vm01.stdout:Deploying unmanaged mgr service...
2026-03-10T06:41:47.109 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled mgr update...
2026-03-10T06:41:47.629 INFO:teuthology.orchestra.run.vm01.stdout:Enabling the dashboard module...
2026-03-10T06:41:47.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:47 vm01 ceph-mon[50525]: Added host vm01
2026-03-10T06:41:47.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:47 vm01 ceph-mon[50525]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T06:41:47.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:47 vm01 ceph-mon[50525]: Saving service mon spec with placement count:5
2026-03-10T06:41:47.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:47 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a'
2026-03-10T06:41:47.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:47 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a'
2026-03-10T06:41:47.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:47 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/3681262979' entity='client.admin'
2026-03-10T06:41:47.925 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:47 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/1790013786' entity='client.admin'
2026-03-10T06:41:48.819 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:48 vm01 ceph-mon[50525]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T06:41:48.819 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:48 vm01 ceph-mon[50525]: Saving service mgr spec with placement count:2
2026-03-10T06:41:48.819 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:48 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/66978378' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-10T06:41:48.819 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:48 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a'
2026-03-10T06:41:48.819 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:48 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a'
2026-03-10T06:41:48.819 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:48 vm01 ceph-mon[50525]: from='mgr.14118 192.168.123.101:0/976804778' entity='mgr.a'
2026-03-10T06:41:49.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:48 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: ignoring --setuser ceph since I am not root
2026-03-10T06:41:49.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:48 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: ignoring --setgroup ceph since I am not root
2026-03-10T06:41:49.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:49 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:49.005+0000 7fdea8d64140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T06:41:49.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:49 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:49.052+0000 7fdea8d64140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T06:41:49.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout {
2026-03-10T06:41:49.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 9,
2026-03-10T06:41:49.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T06:41:49.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "active_name": "a",
2026-03-10T06:41:49.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-10T06:41:49.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }
2026-03-10T06:41:49.230 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for the mgr to restart...
2026-03-10T06:41:49.230 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr epoch 9...
2026-03-10T06:41:49.769 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:49 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:49.469+0000 7fdea8d64140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T06:41:50.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:49 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/66978378' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-10T06:41:50.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:49 vm01 ceph-mon[50525]: mgrmap e9: a(active, since 6s)
2026-03-10T06:41:50.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:49 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/3928819593' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T06:41:50.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:49 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:49.767+0000 7fdea8d64140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T06:41:50.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:49 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T06:41:50.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:49 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T06:41:50.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:49 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: from numpy import show_config as show_numpy_config
2026-03-10T06:41:50.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:49 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:49.852+0000 7fdea8d64140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T06:41:50.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:49 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:49.886+0000 7fdea8d64140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T06:41:50.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:49 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:49.952+0000 7fdea8d64140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T06:41:50.702 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:50 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:50.450+0000 7fdea8d64140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T06:41:50.703 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:50 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:50.557+0000 7fdea8d64140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T06:41:50.703 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:50 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:50.595+0000 7fdea8d64140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T06:41:50.703 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:50 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:50.629+0000 7fdea8d64140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T06:41:50.703 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:50 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:50.667+0000 7fdea8d64140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T06:41:51.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:50 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:50.702+0000 7fdea8d64140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T06:41:51.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:50 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:50.863+0000 7fdea8d64140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T06:41:51.111 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:50 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:50.912+0000 7fdea8d64140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T06:41:51.377 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:51.120+0000 7fdea8d64140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T06:41:51.629 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:51.376+0000 7fdea8d64140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T06:41:51.629 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:51.413+0000 7fdea8d64140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T06:41:51.629 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:51.451+0000 7fdea8d64140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T06:41:51.629 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:51.523+0000 7fdea8d64140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T06:41:51.629 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:51.556+0000 7fdea8d64140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T06:41:51.889 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:51.628+0000 7fdea8d64140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T06:41:51.889 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:51.730+0000 7fdea8d64140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T06:41:51.889 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:51.854+0000 7fdea8d64140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T06:41:52.284 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-mon[50525]: Active manager daemon a restarted
2026-03-10T06:41:52.284 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-mon[50525]: Activating manager daemon a
2026-03-10T06:41:52.284 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-mon[50525]: osdmap e3: 0 total, 0 up, 0 in
2026-03-10T06:41:52.284 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-mon[50525]: mgrmap e10: a(active, starting, since 0.00558s)
2026-03-10T06:41:52.284 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T06:41:52.285 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T06:41:52.285 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T06:41:52.285 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T06:41:52.285 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T06:41:52.285 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-mon[50525]: Manager daemon a is now available
2026-03-10T06:41:52.285 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T06:41:52.285 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T06:41:52.285 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T06:41:52.285 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:41:51 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:41:51.888+0000 7fdea8d64140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T06:41:52.953 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout {
2026-03-10T06:41:52.953 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11,
2026-03-10T06:41:52.953 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-10T06:41:52.953 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }
2026-03-10T06:41:52.953 INFO:teuthology.orchestra.run.vm01.stdout:mgr epoch 9 is available
2026-03-10T06:41:52.953 INFO:teuthology.orchestra.run.vm01.stdout:Generating a dashboard self-signed certificate...
2026-03-10T06:41:53.306 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Self-signed certificate created
2026-03-10T06:41:53.306 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial admin user...
2026-03-10T06:41:53.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:53 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:41:53.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:53 vm01 ceph-mon[50525]: [10/Mar/2026:06:41:52] ENGINE Bus STARTING
2026-03-10T06:41:53.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:53 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:41:53.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:53 vm01 ceph-mon[50525]: [10/Mar/2026:06:41:52] ENGINE Serving on http://192.168.123.101:8765
2026-03-10T06:41:53.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:53 vm01 ceph-mon[50525]: mgrmap e11: a(active, since 1.00913s)
2026-03-10T06:41:53.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:53 vm01 ceph-mon[50525]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T06:41:53.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:53 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:41:53.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:53 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:41:53.699 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$OVtJHhfk26hDoV9WNdR4Iu9Qxa9aLGyBuNVZ4d17QZUyvpRn0Tcmm", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773124913, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true}
2026-03-10T06:41:53.699 INFO:teuthology.orchestra.run.vm01.stdout:Fetching dashboard port number...
2026-03-10T06:41:53.917 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 8443
2026-03-10T06:41:53.917 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present
2026-03-10T06:41:53.917 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to open ports <[8443]>. firewalld.service is not available
2026-03-10T06:41:53.919 INFO:teuthology.orchestra.run.vm01.stdout:Ceph Dashboard is now available at:
2026-03-10T06:41:53.919 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:41:53.919 INFO:teuthology.orchestra.run.vm01.stdout: URL: https://vm01.local:8443/
2026-03-10T06:41:53.919 INFO:teuthology.orchestra.run.vm01.stdout: User: admin
2026-03-10T06:41:53.919 INFO:teuthology.orchestra.run.vm01.stdout: Password: imw55qero0
2026-03-10T06:41:53.919 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:41:53.919 INFO:teuthology.orchestra.run.vm01.stdout:Saving cluster configuration to /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config directory
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config:
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout:Or, if you are only running a single cluster on this host:
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout: sudo /home/ubuntu/cephtest/cephadm shell
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout:Please consider enabling telemetry to help improve Ceph:
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout: ceph telemetry on
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout:For more information see:
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:41:54.212 INFO:teuthology.orchestra.run.vm01.stdout:Bootstrap complete.
2026-03-10T06:41:54.243 INFO:tasks.cephadm:Fetching config...
2026-03-10T06:41:54.243 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T06:41:54.243 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.conf of=/dev/stdout
2026-03-10T06:41:54.261 INFO:tasks.cephadm:Fetching client.admin keyring...
2026-03-10T06:41:54.261 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T06:41:54.261 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout
2026-03-10T06:41:54.325 INFO:tasks.cephadm:Fetching mon keyring...
2026-03-10T06:41:54.325 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T06:41:54.325 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/keyring of=/dev/stdout
2026-03-10T06:41:54.392 INFO:tasks.cephadm:Fetching pub ssh key...
2026-03-10T06:41:54.392 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T06:41:54.392 DEBUG:teuthology.orchestra.run.vm01:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout
2026-03-10T06:41:54.449 INFO:tasks.cephadm:Installing pub ssh key for root users...
2026-03-10T06:41:54.449 DEBUG:teuthology.orchestra.run.vm01:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINdTl6QkP3pX8SHsXPh1eUeRqNso2+T62LQYrGcm8OMl ceph-0204d884-1c4c-11f1-a017-9957fb527c0e' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-10T06:41:54.535 INFO:teuthology.orchestra.run.vm01.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINdTl6QkP3pX8SHsXPh1eUeRqNso2+T62LQYrGcm8OMl ceph-0204d884-1c4c-11f1-a017-9957fb527c0e
2026-03-10T06:41:54.556 DEBUG:teuthology.orchestra.run.vm08:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINdTl6QkP3pX8SHsXPh1eUeRqNso2+T62LQYrGcm8OMl ceph-0204d884-1c4c-11f1-a017-9957fb527c0e' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-10T06:41:54.589 INFO:teuthology.orchestra.run.vm08.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINdTl6QkP3pX8SHsXPh1eUeRqNso2+T62LQYrGcm8OMl ceph-0204d884-1c4c-11f1-a017-9957fb527c0e
2026-03-10T06:41:54.597 DEBUG:teuthology.orchestra.run.vm09:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINdTl6QkP3pX8SHsXPh1eUeRqNso2+T62LQYrGcm8OMl ceph-0204d884-1c4c-11f1-a017-9957fb527c0e' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-10T06:41:54.631 INFO:teuthology.orchestra.run.vm09.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINdTl6QkP3pX8SHsXPh1eUeRqNso2+T62LQYrGcm8OMl ceph-0204d884-1c4c-11f1-a017-9957fb527c0e
2026-03-10T06:41:54.641 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph config set mgr mgr/cephadm/allow_ptrace true
2026-03-10T06:41:54.822 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config
2026-03-10T06:41:54.843 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:54 vm01 ceph-mon[50525]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T06:41:54.844 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:54 vm01 ceph-mon[50525]: [10/Mar/2026:06:41:52] ENGINE Serving on https://192.168.123.101:7150
2026-03-10T06:41:54.844 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:54 vm01 ceph-mon[50525]: [10/Mar/2026:06:41:52] ENGINE Bus STARTED
2026-03-10T06:41:54.844 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:54 vm01 ceph-mon[50525]: [10/Mar/2026:06:41:52] ENGINE Client ('192.168.123.101', 44076) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T06:41:54.844 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:54 vm01 ceph-mon[50525]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T06:41:54.844 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:54 vm01 ceph-mon[50525]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T06:41:54.844 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:54 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:41:54.844 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:54 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/426657034' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-10T06:41:54.844 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:54 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/1745436210' entity='client.admin'
2026-03-10T06:41:55.100 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755
2026-03-10T06:41:55.100 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph orch client-keyring set client.admin '*' --mode 0755
2026-03-10T06:41:55.297 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config
2026-03-10T06:41:55.588 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm08
2026-03-10T06:41:55.588 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T06:41:55.588 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.conf
2026-03-10T06:41:55.603 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T06:41:55.603 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T06:41:55.657 INFO:tasks.cephadm:Adding host vm08 to orchestrator...
2026-03-10T06:41:55.657 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph orch host add vm08 2026-03-10T06:41:55.826 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:41:55.941 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:55 vm01 ceph-mon[50525]: mgrmap e12: a(active, since 2s) 2026-03-10T06:41:55.941 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:55 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/1118509722' entity='client.admin' 2026-03-10T06:41:55.941 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:55 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:41:55.941 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:55 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:41:55.941 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:55 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-10T06:41:55.941 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:55 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:41:55.941 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:55 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:41:55.941 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:55 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:41:55.941 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:55 vm01 ceph-mon[50525]: 
from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:41:55.941 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:55 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:41:55.941 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:55 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:41:55.941 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:55 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:41:57.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:56 vm01 ceph-mon[50525]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:41:57.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:56 vm01 ceph-mon[50525]: Updating vm01:/etc/ceph/ceph.conf 2026-03-10T06:41:57.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:56 vm01 ceph-mon[50525]: Updating vm01:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config/ceph.conf 2026-03-10T06:41:57.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:56 vm01 ceph-mon[50525]: Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-10T06:41:57.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:56 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:41:57.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:56 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:41:57.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:56 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:41:57.622 
INFO:teuthology.orchestra.run.vm01.stdout:Added host 'vm08' with addr '192.168.123.108' 2026-03-10T06:41:57.672 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph orch host ls --format=json 2026-03-10T06:41:57.835 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:41:57.873 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:57 vm01 ceph-mon[50525]: Updating vm01:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config/ceph.client.admin.keyring 2026-03-10T06:41:57.873 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:57 vm01 ceph-mon[50525]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:41:57.873 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:57 vm01 ceph-mon[50525]: Deploying cephadm binary to vm08 2026-03-10T06:41:57.873 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:57 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:41:57.874 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:57 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:41:58.050 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:41:58.050 INFO:teuthology.orchestra.run.vm01.stdout:[{"addr": "192.168.123.101", "hostname": "vm01", "labels": [], "status": ""}, {"addr": "192.168.123.108", "hostname": "vm08", "labels": [], "status": ""}] 2026-03-10T06:41:58.115 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm09 2026-03-10T06:41:58.116 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T06:41:58.116 
DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.conf 2026-03-10T06:41:58.133 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T06:41:58.133 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T06:41:58.190 INFO:tasks.cephadm:Adding host vm09 to orchestrator... 2026-03-10T06:41:58.190 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph orch host add vm09 2026-03-10T06:41:58.344 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:41:59.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:59 vm01 ceph-mon[50525]: Added host vm08 2026-03-10T06:41:59.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:59 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:41:59.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:41:59 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:00.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:00 vm01 ceph-mon[50525]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T06:42:00.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:00 vm01 ceph-mon[50525]: from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:42:00.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:00 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:00.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:00 vm01 ceph-mon[50525]: mgrmap e13: a(active, since 7s) 
2026-03-10T06:42:00.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:00 vm01 ceph-mon[50525]: Deploying cephadm binary to vm09 2026-03-10T06:42:00.680 INFO:teuthology.orchestra.run.vm01.stdout:Added host 'vm09' with addr '192.168.123.109' 2026-03-10T06:42:00.729 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph orch host ls --format=json 2026-03-10T06:42:00.891 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:01.128 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:42:01.128 INFO:teuthology.orchestra.run.vm01.stdout:[{"addr": "192.168.123.101", "hostname": "vm01", "labels": [], "status": ""}, {"addr": "192.168.123.108", "hostname": "vm08", "labels": [], "status": ""}, {"addr": "192.168.123.109", "hostname": "vm09", "labels": [], "status": ""}] 2026-03-10T06:42:01.176 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T06:42:01.176 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd crush tunables default 2026-03-10T06:42:01.343 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:01.861 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: Updating vm08:/etc/ceph/ceph.conf 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: Added host vm09 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: Updating vm08:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config/ceph.conf 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 
ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:01.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:01 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/4030316594' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T06:42:02.476 INFO:teuthology.orchestra.run.vm01.stderr:adjusted tunables profile to default 2026-03-10T06:42:02.529 INFO:tasks.cephadm:Adding mon.a on vm01 2026-03-10T06:42:02.529 INFO:tasks.cephadm:Adding mon.b on vm08 2026-03-10T06:42:02.530 INFO:tasks.cephadm:Adding mon.c on vm09 2026-03-10T06:42:02.530 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph orch apply mon '3;vm01:192.168.123.101=a;vm08:192.168.123.108=b;vm09:192.168.123.109=c' 2026-03-10T06:42:02.690 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T06:42:02.730 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T06:42:02.985 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mon update... 
2026-03-10T06:42:03.061 DEBUG:teuthology.orchestra.run.vm08:mon.b> sudo journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mon.b.service 2026-03-10T06:42:03.063 DEBUG:teuthology.orchestra.run.vm09:mon.c> sudo journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mon.c.service 2026-03-10T06:42:03.064 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-10T06:42:03.064 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph mon dump -f json 2026-03-10T06:42:03.276 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T06:42:03.322 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T06:42:03.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:02 vm01 ceph-mon[50525]: Updating vm08:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config/ceph.client.admin.keyring 2026-03-10T06:42:03.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:02 vm01 ceph-mon[50525]: from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T06:42:03.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:02 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:03.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:02 vm01 ceph-mon[50525]: from='client.? 
192.168.123.101:0/4030316594' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T06:42:03.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:02 vm01 ceph-mon[50525]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T06:42:03.593 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:42:03.593 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"0204d884-1c4c-11f1-a017-9957fb527c0e","modified":"2026-03-10T06:41:30.998723Z","created":"2026-03-10T06:41:30.998723Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T06:42:03.593 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm01:192.168.123.101=a;vm08:192.168.123.108=b;vm09:192.168.123.109=c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: Saving service mon spec with placement vm01:192.168.123.101=a;vm08:192.168.123.108=b;vm09:192.168.123.109=c;count:3 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 
10 06:42:03 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: Updating vm09:/etc/ceph/ceph.conf 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: Updating vm09:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config/ceph.conf 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: Updating vm09:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config/ceph.client.admin.keyring 2026-03-10T06:42:04.361 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: from='client.? 192.168.123.109:0/3461392987' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:04.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:03 vm01 ceph-mon[50525]: Deploying daemon mon.c on vm09 2026-03-10T06:42:04.642 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-10T06:42:04.642 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph mon dump -f json 2026-03-10T06:42:04.848 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.c/config 2026-03-10T06:42:05.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:05 vm09 ceph-mon[55409]: mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-10T06:42:06.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:06 vm08 ceph-mon[52886]: mon.b@-1(synchronizing).mgr e13 mkfs or daemon transitioned to available, loading commands 2026-03-10T06:42:10.128 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:42:10.128 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":2,"fsid":"0204d884-1c4c-11f1-a017-9957fb527c0e","modified":"2026-03-10T06:42:05.106165Z","created":"2026-03-10T06:41:30.998723Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:3300","nonce":0},{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 
2026-03-10T06:42:10.128 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 2 2026-03-10T06:42:10.482 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: Deploying daemon mon.b on vm08 2026-03-10T06:42:10.482 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:42:10.482 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: mon.a calling monitor election 2026-03-10T06:42:10.482 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:10.482 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='client.? 192.168.123.109:0/3630207795' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T06:42:10.482 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:10.482 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:10.482 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:10.482 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: mon.c calling monitor election 2026-03-10T06:42:10.482 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: 
dispatch 2026-03-10T06:42:10.482 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:10.482 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:10.482 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:10.482 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: monmap epoch 2 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: fsid 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: last_changed 2026-03-10T06:42:05.106165+0000 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: created 2026-03-10T06:41:30.998723+0000 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: min_mon_release 19 (squid) 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 
ceph-mon[55409]: election_strategy: 1 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.c 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: fsmap 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: mgrmap e13: a(active, since 18s) 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: overall HEALTH_OK 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:10.483 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:10 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: Deploying daemon mon.b on vm08 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: mon.a calling monitor election 
2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='client.? 192.168.123.109:0/3630207795' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: mon.c calling monitor election 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='mgr.14150 
192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: monmap epoch 2 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: fsid 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: last_changed 2026-03-10T06:42:05.106165+0000 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: created 2026-03-10T06:41:30.998723+0000 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: min_mon_release 19 (squid) 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: election_strategy: 1 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.c 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: fsmap 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 
ceph-mon[50525]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: mgrmap e13: a(active, since 18s) 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: overall HEALTH_OK 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:10.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:10 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:11.202 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-10T06:42:11.202 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph mon dump -f json 2026-03-10T06:42:11.361 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:42:11 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:42:11.105+0000 7fde750cf640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-10T06:42:11.368 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.c/config 2026-03-10T06:42:15.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:42:15.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:15.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: mon.a calling monitor election 2026-03-10T06:42:15.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: mon.c calling monitor election 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: mon.b calling monitor election 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: monmap epoch 3 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: fsid 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: last_changed 2026-03-10T06:42:10.421280+0000 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: created 2026-03-10T06:41:30.998723+0000 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: min_mon_release 19 (squid) 
2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: election_strategy: 1 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.c 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: 2: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.b 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: fsmap 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: mgrmap e13: a(active, since 23s) 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: overall HEALTH_OK 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 
ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:15.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:15.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:42:15.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:15.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: mon.a calling monitor election 2026-03-10T06:42:15.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: mon.c calling monitor election 2026-03-10T06:42:15.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:15.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: mon.b calling monitor election 
2026-03-10T06:42:15.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: monmap epoch 3 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: fsid 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: last_changed 2026-03-10T06:42:10.421280+0000 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: created 2026-03-10T06:41:30.998723+0000 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: min_mon_release 19 (squid) 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: election_strategy: 1 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 
vm01 ceph-mon[50525]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.c 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: 2: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.b 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: fsmap 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: mgrmap e13: a(active, since 23s) 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: overall HEALTH_OK 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:15.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: Deploying daemon mon.b on vm08 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: mon.a calling monitor election 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='client.? 192.168.123.109:0/3630207795' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: mon.c calling monitor election 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 
vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: monmap epoch 2 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: fsid 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: last_changed 2026-03-10T06:42:05.106165+0000 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: created 2026-03-10T06:41:30.998723+0000 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: min_mon_release 19 (squid) 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: election_strategy: 1 2026-03-10T06:42:15.895 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.c 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: fsmap 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: mgrmap e13: a(active, since 18s) 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: overall HEALTH_OK 2026-03-10T06:42:15.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 
192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: mon.a calling monitor election 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: mon.c calling monitor election 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: mon.b calling monitor election 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: mon.a is 
new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: monmap epoch 3 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: fsid 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: last_changed 2026-03-10T06:42:10.421280+0000 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: created 2026-03-10T06:41:30.998723+0000 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: min_mon_release 19 (squid) 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: election_strategy: 1 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.c 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: 2: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.b 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: fsmap 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: mgrmap e13: a(active, since 23s) 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: overall HEALTH_OK 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 
2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:15.896 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:16.029 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:42:16.029 
INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":3,"fsid":"0204d884-1c4c-11f1-a017-9957fb527c0e","modified":"2026-03-10T06:42:10.421280Z","created":"2026-03-10T06:41:30.998723Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:3300","nonce":0},{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:3300","nonce":0},{"type":"v1","addr":"192.168.123.108:6789","nonce":0}]},"addr":"192.168.123.108:6789/0","public_addr":"192.168.123.108:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]} 2026-03-10T06:42:16.029 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 3 2026-03-10T06:42:16.099 INFO:tasks.cephadm:Generating final ceph.conf file... 
2026-03-10T06:42:16.099 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph config generate-minimal-conf 2026-03-10T06:42:16.281 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:16.513 INFO:teuthology.orchestra.run.vm01.stdout:# minimal ceph.conf for 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:42:16.513 INFO:teuthology.orchestra.run.vm01.stdout:[global] 2026-03-10T06:42:16.513 INFO:teuthology.orchestra.run.vm01.stdout: fsid = 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:42:16.513 INFO:teuthology.orchestra.run.vm01.stdout: mon_host = [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 2026-03-10T06:42:16.589 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 
2026-03-10T06:42:16.589 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T06:42:16.589 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T06:42:16.617 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T06:42:16.617 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T06:42:16.691 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T06:42:16.691 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T06:42:16.724 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T06:42:16.724 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T06:42:16.792 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T06:42:16.792 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T06:42:16.821 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T06:42:16.821 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: Updating vm01:/etc/ceph/ceph.conf 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: Updating vm08:/etc/ceph/ceph.conf 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: Updating vm09:/etc/ceph/ceph.conf 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: Updating vm08:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config/ceph.conf 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: Updating vm09:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config/ceph.conf 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: Updating vm01:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config/ceph.conf 
2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:16.858 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: Reconfiguring mon.a (unknown last config time)... 
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: Reconfiguring daemon mon.a on vm01
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='client.? 192.168.123.109:0/335334073' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/2051112785' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T06:42:16.859 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: Updating vm01:/etc/ceph/ceph.conf
2026-03-10T06:42:16.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: Updating vm08:/etc/ceph/ceph.conf
2026-03-10T06:42:16.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: Updating vm09:/etc/ceph/ceph.conf
2026-03-10T06:42:16.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: Updating vm08:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config/ceph.conf
2026-03-10T06:42:16.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: Updating vm09:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config/ceph.conf
2026-03-10T06:42:16.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: Updating vm01:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config/ceph.conf
2026-03-10T06:42:16.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: Reconfiguring mon.a (unknown last config time)...
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: Reconfiguring daemon mon.a on vm01
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='client.? 192.168.123.109:0/335334073' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/2051112785' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T06:42:16.862 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:16.885 INFO:tasks.cephadm:Adding mgr.a on vm01
2026-03-10T06:42:16.886 INFO:tasks.cephadm:Adding mgr.b on vm08
2026-03-10T06:42:16.886 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph orch apply mgr '2;vm01=a;vm08=b'
2026-03-10T06:42:17.095 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.c/config
2026-03-10T06:42:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: Updating vm01:/etc/ceph/ceph.conf
2026-03-10T06:42:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: Updating vm08:/etc/ceph/ceph.conf
2026-03-10T06:42:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: Updating vm09:/etc/ceph/ceph.conf
2026-03-10T06:42:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: Updating vm08:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config/ceph.conf
2026-03-10T06:42:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: Updating vm09:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config/ceph.conf
2026-03-10T06:42:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: Updating vm01:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/config/ceph.conf
2026-03-10T06:42:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: Reconfiguring mon.a (unknown last config time)...
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: Reconfiguring daemon mon.a on vm01
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='client.? 192.168.123.109:0/335334073' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/2051112785' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T06:42:17.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.336 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mgr update...
2026-03-10T06:42:17.386 DEBUG:teuthology.orchestra.run.vm08:mgr.b> sudo journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mgr.b.service
2026-03-10T06:42:17.387 INFO:tasks.cephadm:Deploying OSDs...
2026-03-10T06:42:17.388 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T06:42:17.388 DEBUG:teuthology.orchestra.run.vm01:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T06:42:17.403 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:42:17.403 DEBUG:teuthology.orchestra.run.vm01:> ls /dev/[sv]d?
2026-03-10T06:42:17.458 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vda
2026-03-10T06:42:17.458 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdb
2026-03-10T06:42:17.458 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdc
2026-03-10T06:42:17.458 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdd
2026-03-10T06:42:17.458 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vde
2026-03-10T06:42:17.458 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T06:42:17.458 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T06:42:17.458 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdb
2026-03-10T06:42:17.514 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdb
2026-03-10T06:42:17.514 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T06:42:17.514 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10
2026-03-10T06:42:17.514 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T06:42:17.514 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T06:42:17.514 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 06:41:55.051904671 +0000
2026-03-10T06:42:17.514 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 06:38:35.118442985 +0000
2026-03-10T06:42:17.514 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 06:38:35.118442985 +0000
2026-03-10T06:42:17.514 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 06:35:09.205000000 +0000
2026-03-10T06:42:17.515 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T06:42:17.576 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-10T06:42:17.576 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-10T06:42:17.576 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.00010704 s, 4.8 MB/s
2026-03-10T06:42:17.577 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T06:42:17.634 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdc
2026-03-10T06:42:17.691 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdc
2026-03-10T06:42:17.691 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T06:42:17.691 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20
2026-03-10T06:42:17.691 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T06:42:17.691 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T06:42:17.691 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 06:41:55.122904746 +0000
2026-03-10T06:42:17.691 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 06:38:35.085442953 +0000
2026-03-10T06:42:17.691 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 06:38:35.085442953 +0000
2026-03-10T06:42:17.691 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 06:35:09.209000000 +0000
2026-03-10T06:42:17.691 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T06:42:17.700 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 06:42:17 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-a[50735]: 2026-03-10T06:42:17.420+0000 7fde750cf640 -1 mgr.server handle_report got status from non-daemon mon.b
2026-03-10T06:42:17.700 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:17 vm01 ceph-mon[50525]: Reconfiguring mon.b (monmap changed)...
2026-03-10T06:42:17.700 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:17 vm01 ceph-mon[50525]: Reconfiguring daemon mon.b on vm08
2026-03-10T06:42:17.700 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:17 vm01 ceph-mon[50525]: Reconfiguring mon.c (monmap changed)...
2026-03-10T06:42:17.700 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:17 vm01 ceph-mon[50525]: Reconfiguring daemon mon.c on vm09
2026-03-10T06:42:17.700 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:17 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.700 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:17 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T06:42:17.700 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:17 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:17.700 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:17 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T06:42:17.700 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:17 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:17.700 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:17 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T06:42:17.700 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:17 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-10T06:42:17.700 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:17 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T06:42:17.700 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:17 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:17.722 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-10T06:42:17.722 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-10T06:42:17.722 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000177331 s, 2.9 MB/s
2026-03-10T06:42:17.723 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T06:42:17.781 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdd
2026-03-10T06:42:17.838 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdd
2026-03-10T06:42:17.838 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T06:42:17.838 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-10T06:42:17.838 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T06:42:17.838 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T06:42:17.838 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 06:41:55.151904776 +0000
2026-03-10T06:42:17.838 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 06:38:35.102442970 +0000
2026-03-10T06:42:17.838 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 06:38:35.102442970 +0000
2026-03-10T06:42:17.838 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 06:35:09.214000000 +0000
2026-03-10T06:42:17.838 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T06:42:17.900 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-10T06:42:17.900 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-10T06:42:17.900 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000117219 s, 4.4 MB/s
2026-03-10T06:42:17.901 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T06:42:17.957 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vde
2026-03-10T06:42:18.017 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vde
2026-03-10T06:42:18.017 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T06:42:18.017 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-10T06:42:18.017 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T06:42:18.017 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T06:42:18.017 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 06:41:55.179904806 +0000
2026-03-10T06:42:18.017 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 06:38:35.092442960 +0000
2026-03-10T06:42:18.017 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 06:38:35.092442960 +0000
2026-03-10T06:42:18.017 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 06:35:09.245000000 +0000
2026-03-10T06:42:18.017 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T06:42:18.024 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:17 vm08 ceph-mon[52886]: Reconfiguring mon.b (monmap changed)...
2026-03-10T06:42:18.024 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:17 vm08 ceph-mon[52886]: Reconfiguring daemon mon.b on vm08
2026-03-10T06:42:18.024 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:17 vm08 ceph-mon[52886]: Reconfiguring mon.c (monmap changed)...
2026-03-10T06:42:18.024 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:17 vm08 ceph-mon[52886]: Reconfiguring daemon mon.c on vm09
2026-03-10T06:42:18.024 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:17 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:18.024 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:17 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T06:42:18.024 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:17 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:18.025 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:17 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T06:42:18.025 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:17 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:18.025 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:17 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T06:42:18.025 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:17 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-10T06:42:18.025 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:17 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T06:42:18.025 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:17 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:18.025 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:17 vm08 systemd[1]: Starting Ceph mgr.b for 0204d884-1c4c-11f1-a017-9957fb527c0e...
2026-03-10T06:42:18.084 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-10T06:42:18.085 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-10T06:42:18.085 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000128692 s, 4.0 MB/s
2026-03-10T06:42:18.086 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T06:42:18.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:17 vm09 ceph-mon[55409]: Reconfiguring mon.b (monmap changed)...
2026-03-10T06:42:18.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:17 vm09 ceph-mon[55409]: Reconfiguring daemon mon.b on vm08
2026-03-10T06:42:18.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:17 vm09 ceph-mon[55409]: Reconfiguring mon.c (monmap changed)...
2026-03-10T06:42:18.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:17 vm09 ceph-mon[55409]: Reconfiguring daemon mon.c on vm09
2026-03-10T06:42:18.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:17 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:18.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:17 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T06:42:18.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:17 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:18.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:17 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T06:42:18.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:17 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:18.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:17 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T06:42:18.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:17 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-10T06:42:18.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:17 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T06:42:18.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:17 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:18.150 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T06:42:18.150 DEBUG:teuthology.orchestra.run.vm08:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T06:42:18.167 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:42:18.167 DEBUG:teuthology.orchestra.run.vm08:> ls /dev/[sv]d?
2026-03-10T06:42:18.281 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vda
2026-03-10T06:42:18.281 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdb
2026-03-10T06:42:18.281 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdc
2026-03-10T06:42:18.281 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdd
2026-03-10T06:42:18.281 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vde
2026-03-10T06:42:18.281 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T06:42:18.282 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T06:42:18.282 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdb
2026-03-10T06:42:18.389 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdb
2026-03-10T06:42:18.389 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T06:42:18.389 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10
2026-03-10T06:42:18.389 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T06:42:18.389 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T06:42:18.389 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 06:42:00.466821575 +0000
2026-03-10T06:42:18.389 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 06:38:34.595774738 +0000
2026-03-10T06:42:18.389 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 06:38:34.595774738 +0000
2026-03-10T06:42:18.389 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 06:35:40.211000000 +0000
2026-03-10T06:42:18.389 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T06:42:18.394 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:18 vm08 podman[53806]: 2026-03-10 06:42:18.024735236 +0000 UTC m=+0.016257708 container create e75496133b4d674c1f4d31a27f50d41733dfed8c1ca4caa5dde0893e7bf0570f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-10T06:42:18.394 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:18 vm08 podman[53806]: 2026-03-10 06:42:18.059149363 +0000 UTC m=+0.050671825 container init e75496133b4d674c1f4d31a27f50d41733dfed8c1ca4caa5dde0893e7bf0570f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
2026-03-10T06:42:18.394 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:18 vm08 podman[53806]: 2026-03-10 06:42:18.061627111 +0000 UTC m=+0.053149583 container start e75496133b4d674c1f4d31a27f50d41733dfed8c1ca4caa5dde0893e7bf0570f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T06:42:18.394 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:18 vm08 bash[53806]: e75496133b4d674c1f4d31a27f50d41733dfed8c1ca4caa5dde0893e7bf0570f
2026-03-10T06:42:18.394 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:18 vm08 podman[53806]: 2026-03-10 06:42:18.018100298 +0000 UTC m=+0.009622781 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T06:42:18.394 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:18 vm08 systemd[1]: Started Ceph mgr.b for 0204d884-1c4c-11f1-a017-9957fb527c0e.
2026-03-10T06:42:18.394 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:18 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:18.164+0000 7f8e2fd72140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T06:42:18.394 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:18 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:18.210+0000 7f8e2fd72140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T06:42:18.454 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T06:42:18.454 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T06:42:18.454 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 9.7643e-05 s, 5.2 MB/s
2026-03-10T06:42:18.455 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T06:42:18.474 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdc
2026-03-10T06:42:18.548 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdc
2026-03-10T06:42:18.548 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T06:42:18.548 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20
2026-03-10T06:42:18.548 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T06:42:18.548 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T06:42:18.548 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 06:42:00.490821595 +0000
2026-03-10T06:42:18.548 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 06:38:34.602774745 +0000
2026-03-10T06:42:18.548 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 06:38:34.602774745 +0000
2026-03-10T06:42:18.548 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 06:35:40.215000000 +0000
2026-03-10T06:42:18.549 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T06:42:18.632 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T06:42:18.632 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T06:42:18.636 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.00402548 s, 127 kB/s
2026-03-10T06:42:18.637 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T06:42:18.661 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdd
2026-03-10T06:42:18.718 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdd
2026-03-10T06:42:18.719 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T06:42:18.719 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-10T06:42:18.719 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T06:42:18.719 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T06:42:18.719 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 06:42:00.514821614 +0000
2026-03-10T06:42:18.719 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 06:38:34.617774760 +0000
2026-03-10T06:42:18.719 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 06:38:34.617774760 +0000
2026-03-10T06:42:18.719 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 06:35:40.218000000 +0000
2026-03-10T06:42:18.719 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T06:42:18.781 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:18 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:18.682+0000 7f8e2fd72140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T06:42:18.783 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T06:42:18.783 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T06:42:18.783 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000111929 s, 4.6 MB/s
2026-03-10T06:42:18.784 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T06:42:18.841 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vde
2026-03-10T06:42:18.897 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vde
2026-03-10T06:42:18.898 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T06:42:18.898 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-10T06:42:18.898 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T06:42:18.898 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T06:42:18.898 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 06:42:00.538821634 +0000
2026-03-10T06:42:18.898 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 06:38:34.677774822 +0000
2026-03-10T06:42:18.898 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 06:38:34.677774822 +0000
2026-03-10T06:42:18.898 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 06:35:40.221000000 +0000
2026-03-10T06:42:18.898 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T06:42:18.979 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T06:42:18.979 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T06:42:18.979 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.00017562 s, 2.9 MB/s
2026-03-10T06:42:18.980 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T06:42:19.050 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T06:42:19.050 DEBUG:teuthology.orchestra.run.vm09:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T06:42:19.065 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:42:19.065 DEBUG:teuthology.orchestra.run.vm09:> ls /dev/[sv]d?
2026-03-10T06:42:19.077 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-mon[52886]: from='client.24106 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm01=a;vm08=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T06:42:19.077 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-mon[52886]: Saving service mgr spec with placement vm01=a;vm08=b;count:2
2026-03-10T06:42:19.077 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-mon[52886]: Deploying daemon mgr.b on vm08
2026-03-10T06:42:19.077 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-mon[52886]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T06:42:19.077 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:19.077 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:19.077 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:19.077 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:19.077 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T06:42:19.077 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:18 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:18.988+0000 7f8e2fd72140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T06:42:19.120 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vda
2026-03-10T06:42:19.120 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdb
2026-03-10T06:42:19.120 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdc
2026-03-10T06:42:19.120 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdd
2026-03-10T06:42:19.120 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vde
2026-03-10T06:42:19.120 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T06:42:19.120 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T06:42:19.120 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdb
2026-03-10T06:42:19.177 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdb
2026-03-10T06:42:19.177 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T06:42:19.177 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 241 Links: 1 Device type: fc,10
2026-03-10T06:42:19.177 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T06:42:19.177 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T06:42:19.177 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 06:42:02.944010262 +0000
2026-03-10T06:42:19.177 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 06:38:34.904956870 +0000
2026-03-10T06:42:19.177 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 06:38:34.904956870 +0000
2026-03-10T06:42:19.177 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 06:36:05.272000000 +0000
2026-03-10T06:42:19.177 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T06:42:19.241 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T06:42:19.241 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T06:42:19.241 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000154058 s, 3.3 MB/s
2026-03-10T06:42:19.242 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T06:42:19.287 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:19 vm01 ceph-mon[50525]: from='client.24106 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm01=a;vm08=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T06:42:19.287 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:19 vm01 ceph-mon[50525]: Saving service mgr spec with placement vm01=a;vm08=b;count:2
2026-03-10T06:42:19.287 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:19 vm01 ceph-mon[50525]: Deploying daemon mgr.b on vm08
2026-03-10T06:42:19.287 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:19 vm01 ceph-mon[50525]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T06:42:19.287 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:19 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:19.287 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:19 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:19.287 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:19 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:19.287 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:19 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:19.287 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:19 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T06:42:19.298 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdc
2026-03-10T06:42:19.354 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdc
2026-03-10T06:42:19.354 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T06:42:19.354 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 252 Links: 1 Device type: fc,20
2026-03-10T06:42:19.354 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T06:42:19.354 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T06:42:19.354 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 06:42:02.973010289 +0000
2026-03-10T06:42:19.354 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 06:38:34.929956890 +0000
2026-03-10T06:42:19.354 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 06:38:34.929956890 +0000
2026-03-10T06:42:19.354 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 06:36:05.276000000 +0000
2026-03-10T06:42:19.354 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T06:42:19.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:19 vm09 ceph-mon[55409]: from='client.24106 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm01=a;vm08=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T06:42:19.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:19 vm09 ceph-mon[55409]: Saving service mgr spec with placement vm01=a;vm08=b;count:2
2026-03-10T06:42:19.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:19 vm09 ceph-mon[55409]: Deploying daemon mgr.b on vm08
2026-03-10T06:42:19.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:19 vm09 ceph-mon[55409]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T06:42:19.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:19 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:19.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:19 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:19.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:19 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:19.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:19 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:19.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:19 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T06:42:19.378 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T06:42:19.378 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T06:42:19.378 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000119383 s, 4.3 MB/s
2026-03-10T06:42:19.380 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T06:42:19.394 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T06:42:19.394 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T06:42:19.394 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: from numpy import show_config as show_numpy_config
2026-03-10T06:42:19.394 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:19.082+0000 7f8e2fd72140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T06:42:19.394 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:19.117+0000 7f8e2fd72140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T06:42:19.394 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:19.184+0000 7f8e2fd72140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T06:42:19.439 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdd
2026-03-10T06:42:19.495 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdd
2026-03-10T06:42:19.495 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T06:42:19.496 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-10T06:42:19.496 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T06:42:19.496 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T06:42:19.496 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 06:42:03.008010322 +0000
2026-03-10T06:42:19.496 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 06:38:34.930956890 +0000
2026-03-10T06:42:19.496 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 06:38:34.930956890 +0000
2026-03-10T06:42:19.496 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 06:36:05.281000000 +0000
2026-03-10T06:42:19.496 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T06:42:19.562 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T06:42:19.562 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T06:42:19.562 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000175648 s, 2.9 MB/s
2026-03-10T06:42:19.563 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T06:42:19.620 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vde
2026-03-10T06:42:19.679 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vde
2026-03-10T06:42:19.679 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T06:42:19.679 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-10T06:42:19.679 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T06:42:19.679 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T06:42:19.679 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 06:42:03.052010363 +0000
2026-03-10T06:42:19.679 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 06:38:34.943956901 +0000
2026-03-10T06:42:19.679 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 06:38:34.943956901 +0000
2026-03-10T06:42:19.679 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 06:36:05.284000000 +0000
2026-03-10T06:42:19.679 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T06:42:19.742 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T06:42:19.742 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T06:42:19.742 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000174706 s, 2.9 MB/s
2026-03-10T06:42:19.743 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T06:42:19.800 INFO:tasks.cephadm:Deploying osd.0 on vm01 with /dev/vde...
2026-03-10T06:42:19.800 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- lvm zap /dev/vde
2026-03-10T06:42:19.936 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:19.683+0000 7f8e2fd72140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T06:42:19.936 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:19.786+0000 7f8e2fd72140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T06:42:19.936 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:19.823+0000 7f8e2fd72140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T06:42:19.936 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:19.856+0000 7f8e2fd72140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T06:42:19.936 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:19.896+0000 7f8e2fd72140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T06:42:19.958 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config
2026-03-10T06:42:20.034 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:20.035 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:20.035 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:20.087 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:20.087 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:20.087 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T06:42:20.087 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T06:42:20.087 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a'
2026-03-10T06:42:20.087 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: Reconfiguring mgr.a (unknown last config time)...
2026-03-10T06:42:20.087 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T06:42:20.087 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T06:42:20.087 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:20.087 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: Reconfiguring daemon mgr.a on vm01 2026-03-10T06:42:20.087 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:20.087 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:20.087 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:20.088 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:20.088 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:20.088 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: from='mgr.14150 
192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:20.088 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:20 vm01 ceph-mon[50525]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:20.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:20.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:20.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: Reconfiguring mgr.a (unknown last config time)... 2026-03-10T06:42:20.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T06:42:20.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T06:42:20.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:20.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: Reconfiguring daemon mgr.a on vm01 2026-03-10T06:42:20.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:20.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:20.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 
06:42:20 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:20.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:20.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:20.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:20.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:20 vm09 ceph-mon[55409]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: Reconfiguring mgr.a (unknown last config 
time)... 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: Reconfiguring daemon mgr.a on vm01 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: from='mgr.14150 
192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-mon[52886]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:19 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:19.935+0000 7f8e2fd72140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:20.101+0000 7f8e2fd72140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T06:42:20.378 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:20.152+0000 7f8e2fd72140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T06:42:20.644 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:20.376+0000 7f8e2fd72140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T06:42:20.727 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:42:20.742 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph orch daemon add osd vm01:/dev/vde 2026-03-10T06:42:20.903 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:20.917 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:20.648+0000 7f8e2fd72140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T06:42:20.917 
INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:20.690+0000 7f8e2fd72140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T06:42:20.917 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:20.733+0000 7f8e2fd72140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T06:42:20.917 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:20.806+0000 7f8e2fd72140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T06:42:20.917 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:20.842+0000 7f8e2fd72140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T06:42:21.196 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:20 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:20.916+0000 7f8e2fd72140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T06:42:21.197 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:21 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:21.028+0000 7f8e2fd72140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T06:42:21.197 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:21 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:21.160+0000 7f8e2fd72140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T06:42:21.454 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:42:21 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mgr-b[53816]: 2026-03-10T06:42:21.195+0000 7f8e2fd72140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T06:42:21.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:21 vm01 
ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:21.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:21 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T06:42:21.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:21 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T06:42:21.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:21 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:21.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:21 vm01 ceph-mon[50525]: Standby manager daemon b started 2026-03-10T06:42:21.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:21 vm01 ceph-mon[50525]: from='mgr.? 192.168.123.108:0/4141971300' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T06:42:21.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:21 vm01 ceph-mon[50525]: from='mgr.? 192.168.123.108:0/4141971300' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T06:42:21.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:21 vm01 ceph-mon[50525]: from='mgr.? 192.168.123.108:0/4141971300' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T06:42:21.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:21 vm01 ceph-mon[50525]: from='mgr.? 
192.168.123.108:0/4141971300' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T06:42:21.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:21 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:21.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:21 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T06:42:21.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:21 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T06:42:21.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:21 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:21.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:21 vm09 ceph-mon[55409]: Standby manager daemon b started 2026-03-10T06:42:21.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:21 vm09 ceph-mon[55409]: from='mgr.? 192.168.123.108:0/4141971300' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T06:42:21.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:21 vm09 ceph-mon[55409]: from='mgr.? 192.168.123.108:0/4141971300' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T06:42:21.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:21 vm09 ceph-mon[55409]: from='mgr.? 192.168.123.108:0/4141971300' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T06:42:21.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:21 vm09 ceph-mon[55409]: from='mgr.? 
192.168.123.108:0/4141971300' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T06:42:21.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:21 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:21.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:21 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T06:42:21.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:21 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T06:42:21.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:21 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:21.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:21 vm08 ceph-mon[52886]: Standby manager daemon b started 2026-03-10T06:42:21.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:21 vm08 ceph-mon[52886]: from='mgr.? 192.168.123.108:0/4141971300' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T06:42:21.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:21 vm08 ceph-mon[52886]: from='mgr.? 192.168.123.108:0/4141971300' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T06:42:21.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:21 vm08 ceph-mon[52886]: from='mgr.? 192.168.123.108:0/4141971300' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T06:42:21.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:21 vm08 ceph-mon[52886]: from='mgr.? 
192.168.123.108:0/4141971300' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T06:42:22.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:22 vm01 ceph-mon[50525]: from='client.24118 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:42:22.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:22 vm01 ceph-mon[50525]: mgrmap e14: a(active, since 29s), standbys: b 2026-03-10T06:42:22.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:22 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T06:42:22.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:22 vm01 ceph-mon[50525]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:22.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:22 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/3723339826' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "63e03954-df21-4442-a789-b5f9f56cd792"}]: dispatch 2026-03-10T06:42:22.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:22 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/3723339826' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "63e03954-df21-4442-a789-b5f9f56cd792"}]': finished 2026-03-10T06:42:22.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:22 vm01 ceph-mon[50525]: osdmap e5: 1 total, 0 up, 1 in 2026-03-10T06:42:22.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:22 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T06:42:22.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:22 vm01 ceph-mon[50525]: from='client.? 
192.168.123.101:0/1925099565' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T06:42:22.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:22 vm09 ceph-mon[55409]: from='client.24118 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:42:22.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:22 vm09 ceph-mon[55409]: mgrmap e14: a(active, since 29s), standbys: b 2026-03-10T06:42:22.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:22 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T06:42:22.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:22 vm09 ceph-mon[55409]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:22.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:22 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/3723339826' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "63e03954-df21-4442-a789-b5f9f56cd792"}]: dispatch 2026-03-10T06:42:22.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:22 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/3723339826' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "63e03954-df21-4442-a789-b5f9f56cd792"}]': finished 2026-03-10T06:42:22.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:22 vm09 ceph-mon[55409]: osdmap e5: 1 total, 0 up, 1 in 2026-03-10T06:42:22.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:22 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T06:42:22.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:22 vm09 ceph-mon[55409]: from='client.? 
192.168.123.101:0/1925099565' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T06:42:22.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:22 vm08 ceph-mon[52886]: from='client.24118 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:42:22.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:22 vm08 ceph-mon[52886]: mgrmap e14: a(active, since 29s), standbys: b 2026-03-10T06:42:22.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:22 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T06:42:22.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:22 vm08 ceph-mon[52886]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:22.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:22 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/3723339826' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "63e03954-df21-4442-a789-b5f9f56cd792"}]: dispatch 2026-03-10T06:42:22.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:22 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/3723339826' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "63e03954-df21-4442-a789-b5f9f56cd792"}]': finished 2026-03-10T06:42:22.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:22 vm08 ceph-mon[52886]: osdmap e5: 1 total, 0 up, 1 in 2026-03-10T06:42:22.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:22 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T06:42:22.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:22 vm08 ceph-mon[52886]: from='client.? 
192.168.123.101:0/1925099565' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T06:42:24.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:23 vm09 ceph-mon[55409]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:24.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:23 vm01 ceph-mon[50525]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:24.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:23 vm08 ceph-mon[52886]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:25.953 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:25 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T06:42:25.953 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:25 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:26.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:25 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T06:42:26.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:25 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:26.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:25 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T06:42:26.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:25 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:27.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:26 vm09 ceph-mon[55409]: 
Deploying daemon osd.0 on vm01 2026-03-10T06:42:27.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:26 vm09 ceph-mon[55409]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:27.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:26 vm01 ceph-mon[50525]: Deploying daemon osd.0 on vm01 2026-03-10T06:42:27.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:26 vm01 ceph-mon[50525]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:27.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:26 vm08 ceph-mon[52886]: Deploying daemon osd.0 on vm01 2026-03-10T06:42:27.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:26 vm08 ceph-mon[52886]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:27.814 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:27 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:27.814 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:27 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:27.814 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:27 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:28.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:27 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:28.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:27 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:28.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:27 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:28.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:27 vm08 ceph-mon[52886]: from='mgr.14150 
192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:28.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:27 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:28.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:27 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:28.640 INFO:teuthology.orchestra.run.vm01.stdout:Created osd(s) 0 on host 'vm01' 2026-03-10T06:42:28.695 DEBUG:teuthology.orchestra.run.vm01:osd.0> sudo journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@osd.0.service 2026-03-10T06:42:28.696 INFO:tasks.cephadm:Deploying osd.1 on vm08 with /dev/vde... 2026-03-10T06:42:28.696 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- lvm zap /dev/vde 2026-03-10T06:42:28.863 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.b/config 2026-03-10T06:42:28.989 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:28 vm01 ceph-mon[50525]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:28.989 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:28 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:28.989 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:28 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:28.989 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:28 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:28.989 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:28 
vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:28.989 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:28 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:28.989 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:28 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:28.989 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:28 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:28.989 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:28 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:29.000 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:28 vm08 ceph-mon[52886]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:29.000 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:28 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:29.000 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:28 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:29.000 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:28 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:29.000 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:28 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:29.000 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:28 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:29.000 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:28 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:29.000 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:28 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:29.000 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:28 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:29.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:28 vm09 ceph-mon[55409]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:29.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:28 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:29.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:28 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:29.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:28 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:29.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:28 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:29.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:28 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:29.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:28 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:29.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:28 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 
2026-03-10T06:42:29.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:28 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:29.629 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:42:29.646 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph orch daemon add osd vm08:/dev/vde 2026-03-10T06:42:29.803 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.b/config 2026-03-10T06:42:29.823 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:42:29 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-0[59922]: 2026-03-10T06:42:29.565+0000 7f74d32d6740 -1 osd.0 0 log_to_monitors true 2026-03-10T06:42:29.823 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:29 vm08 ceph-mon[52886]: from='osd.0 [v2:192.168.123.101:6802/2271571456,v1:192.168.123.101:6803/2271571456]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T06:42:29.823 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:29 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:30.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:29 vm09 ceph-mon[55409]: from='osd.0 [v2:192.168.123.101:6802/2271571456,v1:192.168.123.101:6803/2271571456]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T06:42:30.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:29 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:30.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:29 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 
2026-03-10T06:42:30.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:29 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T06:42:30.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:29 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:30.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:29 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:30.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:29 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:30.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:29 vm01 ceph-mon[50525]: from='osd.0 [v2:192.168.123.101:6802/2271571456,v1:192.168.123.101:6803/2271571456]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T06:42:30.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:29 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:30.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:29 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:30.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:29 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T06:42:30.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:29 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:30.111 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:29 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:30.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:29 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:30.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:29 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:30.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:29 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T06:42:30.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:29 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:30.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:29 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:30.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:29 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: Detected new or changed devices on vm01 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: Adjusting osd_memory_target on vm01 to 257.0M 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: Unable to set osd_memory_target on vm01 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: 
from='osd.0 [v2:192.168.123.101:6802/2271571456,v1:192.168.123.101:6803/2271571456]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: from='osd.0 [v2:192.168.123.101:6802/2271571456,v1:192.168.123.101:6803/2271571456]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: from='client.? 
192.168.123.108:0/3700708231' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "48860b07-2f08-4996-aacf-d45d8f4e30c7"}]: dispatch 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "48860b07-2f08-4996-aacf-d45d8f4e30c7"}]: dispatch 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: from='osd.0 [v2:192.168.123.101:6802/2271571456,v1:192.168.123.101:6803/2271571456]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "48860b07-2f08-4996-aacf-d45d8f4e30c7"}]': finished 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: osdmap e7: 2 total, 0 up, 2 in 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:31.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T06:42:31.109 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: Detected new or changed devices on vm01 2026-03-10T06:42:31.109 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: Adjusting osd_memory_target on vm01 to 
257.0M 2026-03-10T06:42:31.109 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: Unable to set osd_memory_target on vm01 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T06:42:31.109 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: from='osd.0 [v2:192.168.123.101:6802/2271571456,v1:192.168.123.101:6803/2271571456]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T06:42:31.109 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T06:42:31.109 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: from='osd.0 [v2:192.168.123.101:6802/2271571456,v1:192.168.123.101:6803/2271571456]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T06:42:31.109 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T06:42:31.109 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:31.109 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T06:42:31.109 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T06:42:31.109 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:31.110 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: from='client.? 192.168.123.108:0/3700708231' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "48860b07-2f08-4996-aacf-d45d8f4e30c7"}]: dispatch 2026-03-10T06:42:31.110 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "48860b07-2f08-4996-aacf-d45d8f4e30c7"}]: dispatch 2026-03-10T06:42:31.110 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: from='osd.0 [v2:192.168.123.101:6802/2271571456,v1:192.168.123.101:6803/2271571456]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-10T06:42:31.110 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "48860b07-2f08-4996-aacf-d45d8f4e30c7"}]': finished 2026-03-10T06:42:31.110 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: osdmap e7: 2 total, 0 up, 2 in 2026-03-10T06:42:31.110 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T06:42:31.110 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:31.110 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T06:42:31.110 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:42:30 vm01 
ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-0[59922]: 2026-03-10T06:42:30.695+0000 7f74cf257640 -1 osd.0 0 waiting for initial osdmap 2026-03-10T06:42:31.111 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:42:30 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-0[59922]: 2026-03-10T06:42:30.699+0000 7f74cb081640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: Detected new or changed devices on vm01 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: Adjusting osd_memory_target on vm01 to 257.0M 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: Unable to set osd_memory_target on vm01 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: from='osd.0 [v2:192.168.123.101:6802/2271571456,v1:192.168.123.101:6803/2271571456]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: from='osd.0 [v2:192.168.123.101:6802/2271571456,v1:192.168.123.101:6803/2271571456]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: pgmap 
v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: from='client.? 192.168.123.108:0/3700708231' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "48860b07-2f08-4996-aacf-d45d8f4e30c7"}]: dispatch 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "48860b07-2f08-4996-aacf-d45d8f4e30c7"}]: dispatch 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: from='osd.0 [v2:192.168.123.101:6802/2271571456,v1:192.168.123.101:6803/2271571456]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "48860b07-2f08-4996-aacf-d45d8f4e30c7"}]': finished 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: osdmap e7: 2 total, 0 up, 2 in 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:31.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T06:42:32.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:31 vm09 ceph-mon[55409]: from='client.24143 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:42:32.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:31 vm09 ceph-mon[55409]: from='client.? 
192.168.123.108:0/2845813059' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T06:42:32.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:31 vm09 ceph-mon[55409]: osd.0 [v2:192.168.123.101:6802/2271571456,v1:192.168.123.101:6803/2271571456] boot 2026-03-10T06:42:32.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:31 vm09 ceph-mon[55409]: osdmap e8: 2 total, 1 up, 2 in 2026-03-10T06:42:32.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:31 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T06:42:32.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:31 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:32.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:31 vm01 ceph-mon[50525]: from='client.24143 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:42:32.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:31 vm01 ceph-mon[50525]: from='client.? 
192.168.123.108:0/2845813059' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T06:42:32.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:31 vm01 ceph-mon[50525]: osd.0 [v2:192.168.123.101:6802/2271571456,v1:192.168.123.101:6803/2271571456] boot 2026-03-10T06:42:32.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:31 vm01 ceph-mon[50525]: osdmap e8: 2 total, 1 up, 2 in 2026-03-10T06:42:32.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:31 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T06:42:32.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:31 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:32.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:31 vm08 ceph-mon[52886]: from='client.24143 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:42:32.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:31 vm08 ceph-mon[52886]: from='client.? 
192.168.123.108:0/2845813059' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T06:42:32.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:31 vm08 ceph-mon[52886]: osd.0 [v2:192.168.123.101:6802/2271571456,v1:192.168.123.101:6803/2271571456] boot 2026-03-10T06:42:32.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:31 vm08 ceph-mon[52886]: osdmap e8: 2 total, 1 up, 2 in 2026-03-10T06:42:32.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:31 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T06:42:32.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:31 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:33.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:32 vm09 ceph-mon[55409]: purged_snaps scrub starts 2026-03-10T06:42:33.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:32 vm09 ceph-mon[55409]: purged_snaps scrub ok 2026-03-10T06:42:33.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:32 vm09 ceph-mon[55409]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:33.110 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:32 vm01 ceph-mon[50525]: purged_snaps scrub starts 2026-03-10T06:42:33.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:32 vm01 ceph-mon[50525]: purged_snaps scrub ok 2026-03-10T06:42:33.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:32 vm01 ceph-mon[50525]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:33.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:32 vm08 ceph-mon[52886]: purged_snaps scrub starts 2026-03-10T06:42:33.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:32 vm08 ceph-mon[52886]: purged_snaps scrub ok 2026-03-10T06:42:33.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:32 vm08 
ceph-mon[52886]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T06:42:33.849 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:33 vm08 ceph-mon[52886]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T06:42:33.850 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:33 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:34.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:33 vm09 ceph-mon[55409]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T06:42:34.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:33 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:34.110 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:33 vm01 ceph-mon[50525]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T06:42:34.110 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:33 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:34.920 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:34 vm08 ceph-mon[52886]: pgmap v20: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T06:42:34.920 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:34 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T06:42:34.920 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:34 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:35.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:34 vm09 ceph-mon[55409]: pgmap v20: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T06:42:35.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:34 vm09 ceph-mon[55409]: 
from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T06:42:35.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:34 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:35.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:34 vm01 ceph-mon[50525]: pgmap v20: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T06:42:35.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:34 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T06:42:35.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:34 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:36.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:35 vm09 ceph-mon[55409]: Deploying daemon osd.1 on vm08 2026-03-10T06:42:36.110 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:35 vm01 ceph-mon[50525]: Deploying daemon osd.1 on vm08 2026-03-10T06:42:36.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:35 vm08 ceph-mon[52886]: Deploying daemon osd.1 on vm08 2026-03-10T06:42:37.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:36 vm08 ceph-mon[52886]: pgmap v21: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T06:42:37.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:36 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:37.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:36 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:37.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:36 vm08 ceph-mon[52886]: 
from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:37.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:36 vm09 ceph-mon[55409]: pgmap v21: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T06:42:37.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:36 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:37.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:36 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:37.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:36 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:37.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:36 vm01 ceph-mon[50525]: pgmap v21: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T06:42:37.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:36 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:37.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:36 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:37.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:36 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:37.395 INFO:teuthology.orchestra.run.vm08.stdout:Created osd(s) 1 on host 'vm08' 2026-03-10T06:42:37.448 DEBUG:teuthology.orchestra.run.vm08:osd.1> sudo journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@osd.1.service 2026-03-10T06:42:37.449 INFO:tasks.cephadm:Deploying osd.2 on vm09 with /dev/vde... 
2026-03-10T06:42:37.449 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- lvm zap /dev/vde 2026-03-10T06:42:37.600 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.c/config 2026-03-10T06:42:37.989 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:37 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:37.990 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:37 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:37.990 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:37 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:37.990 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:37 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:37.990 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:37 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:37.990 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:37 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:37.990 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:37 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:37.990 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:37 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:37.990 
INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:37 vm09 ceph-mon[55409]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T06:42:38.233 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:37 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:38.233 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:37 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:38.234 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:37 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:38.234 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:37 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:38.234 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:37 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:38.234 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:37 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:38.234 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:37 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:38.234 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:37 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:38.234 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:37 vm08 ceph-mon[52886]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T06:42:38.234 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:42:38 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-1[56477]: 2026-03-10T06:42:38.099+0000 7f78e45b8740 -1 osd.1 0 log_to_monitors 
true 2026-03-10T06:42:38.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:37 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:38.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:37 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:38.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:37 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:38.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:37 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:38.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:37 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:38.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:37 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:38.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:37 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:38.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:37 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:38.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:37 vm01 ceph-mon[50525]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T06:42:38.381 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:42:38.397 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 
0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph orch daemon add osd vm09:/dev/vde 2026-03-10T06:42:38.548 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.c/config 2026-03-10T06:42:39.286 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:38 vm09 ceph-mon[55409]: from='osd.1 [v2:192.168.123.108:6800/3501309662,v1:192.168.123.108:6801/3501309662]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T06:42:39.286 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:38 vm09 ceph-mon[55409]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T06:42:39.286 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:38 vm09 ceph-mon[55409]: Detected new or changed devices on vm08 2026-03-10T06:42:39.286 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:38 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:39.286 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:38 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:39.286 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:38 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T06:42:39.286 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:38 vm09 ceph-mon[55409]: Adjusting osd_memory_target on vm08 to 257.0M 2026-03-10T06:42:39.286 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:38 vm09 ceph-mon[55409]: Unable to set osd_memory_target on vm08 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-10T06:42:39.286 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:38 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T06:42:39.286 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:38 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:39.286 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:38 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:39.286 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:38 vm09 ceph-mon[55409]: from='client.24157 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:42:39.286 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:38 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T06:42:39.286 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:38 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T06:42:39.286 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:38 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:39.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:38 vm01 ceph-mon[50525]: from='osd.1 [v2:192.168.123.108:6800/3501309662,v1:192.168.123.108:6801/3501309662]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T06:42:39.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:38 vm01 ceph-mon[50525]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T06:42:39.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:38 vm01 ceph-mon[50525]: Detected new or 
changed devices on vm08 2026-03-10T06:42:39.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:38 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:39.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:38 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:39.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:38 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T06:42:39.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:38 vm01 ceph-mon[50525]: Adjusting osd_memory_target on vm08 to 257.0M 2026-03-10T06:42:39.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:38 vm01 ceph-mon[50525]: Unable to set osd_memory_target on vm08 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-10T06:42:39.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:38 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:39.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:38 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:39.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:38 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:39.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:38 vm01 ceph-mon[50525]: from='client.24157 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:42:39.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:38 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 
cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T06:42:39.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:38 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T06:42:39.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:38 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:39.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:38 vm08 ceph-mon[52886]: from='osd.1 [v2:192.168.123.108:6800/3501309662,v1:192.168.123.108:6801/3501309662]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T06:42:39.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:38 vm08 ceph-mon[52886]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T06:42:39.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:38 vm08 ceph-mon[52886]: Detected new or changed devices on vm08 2026-03-10T06:42:39.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:38 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:39.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:38 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:39.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:38 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T06:42:39.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:38 vm08 ceph-mon[52886]: Adjusting osd_memory_target on vm08 to 257.0M 2026-03-10T06:42:39.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:38 vm08 
ceph-mon[52886]: Unable to set osd_memory_target on vm08 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-10T06:42:39.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:38 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:39.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:38 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:39.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:38 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:39.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:38 vm08 ceph-mon[52886]: from='client.24157 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:42:39.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:38 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T06:42:39.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:38 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T06:42:39.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:38 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:40.354 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:39 vm08 ceph-mon[52886]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T06:42:40.354 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:39 vm08 
ceph-mon[52886]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T06:42:40.354 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:39 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:40.354 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:39 vm08 ceph-mon[52886]: from='osd.1 [v2:192.168.123.108:6800/3501309662,v1:192.168.123.108:6801/3501309662]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T06:42:40.355 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:39 vm08 ceph-mon[52886]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T06:42:40.355 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:39 vm08 ceph-mon[52886]: from='client.? 192.168.123.109:0/1304877356' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9182fe25-a791-4c18-b7b7-ed943e90fb42"}]: dispatch 2026-03-10T06:42:40.355 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:39 vm08 ceph-mon[52886]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9182fe25-a791-4c18-b7b7-ed943e90fb42"}]: dispatch 2026-03-10T06:42:40.355 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:39 vm08 ceph-mon[52886]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T06:42:40.355 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:39 vm08 ceph-mon[52886]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9182fe25-a791-4c18-b7b7-ed943e90fb42"}]': finished 2026-03-10T06:42:40.355 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:39 vm08 ceph-mon[52886]: osdmap e11: 3 total, 1 up, 3 in 2026-03-10T06:42:40.355 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:39 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:40.355 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:39 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:40.355 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:39 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:40.355 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:39 vm08 ceph-mon[52886]: from='client.? 
192.168.123.109:0/2953662365' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T06:42:40.355 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:39 vm08 ceph-mon[52886]: pgmap v25: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T06:42:40.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:39 vm09 ceph-mon[55409]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T06:42:40.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:39 vm09 ceph-mon[55409]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T06:42:40.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:39 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:40.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:39 vm09 ceph-mon[55409]: from='osd.1 [v2:192.168.123.108:6800/3501309662,v1:192.168.123.108:6801/3501309662]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T06:42:40.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:39 vm09 ceph-mon[55409]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T06:42:40.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:39 vm09 ceph-mon[55409]: from='client.? 192.168.123.109:0/1304877356' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9182fe25-a791-4c18-b7b7-ed943e90fb42"}]: dispatch 2026-03-10T06:42:40.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:39 vm09 ceph-mon[55409]: from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9182fe25-a791-4c18-b7b7-ed943e90fb42"}]: dispatch 2026-03-10T06:42:40.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:39 vm09 ceph-mon[55409]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T06:42:40.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:39 vm09 ceph-mon[55409]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9182fe25-a791-4c18-b7b7-ed943e90fb42"}]': finished 2026-03-10T06:42:40.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:39 vm09 ceph-mon[55409]: osdmap e11: 3 total, 1 up, 3 in 2026-03-10T06:42:40.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:39 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:40.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:39 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:40.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:39 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:40.358 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:39 vm09 ceph-mon[55409]: from='client.? 
192.168.123.109:0/2953662365' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T06:42:40.358 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:39 vm09 ceph-mon[55409]: pgmap v25: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T06:42:40.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:39 vm01 ceph-mon[50525]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T06:42:40.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:39 vm01 ceph-mon[50525]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T06:42:40.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:39 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:40.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:39 vm01 ceph-mon[50525]: from='osd.1 [v2:192.168.123.108:6800/3501309662,v1:192.168.123.108:6801/3501309662]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T06:42:40.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:39 vm01 ceph-mon[50525]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T06:42:40.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:39 vm01 ceph-mon[50525]: from='client.? 192.168.123.109:0/1304877356' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9182fe25-a791-4c18-b7b7-ed943e90fb42"}]: dispatch 2026-03-10T06:42:40.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:39 vm01 ceph-mon[50525]: from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9182fe25-a791-4c18-b7b7-ed943e90fb42"}]: dispatch 2026-03-10T06:42:40.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:39 vm01 ceph-mon[50525]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T06:42:40.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:39 vm01 ceph-mon[50525]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9182fe25-a791-4c18-b7b7-ed943e90fb42"}]': finished 2026-03-10T06:42:40.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:39 vm01 ceph-mon[50525]: osdmap e11: 3 total, 1 up, 3 in 2026-03-10T06:42:40.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:39 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:40.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:39 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:40.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:39 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:40.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:39 vm01 ceph-mon[50525]: from='client.? 
192.168.123.109:0/2953662365' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T06:42:40.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:39 vm01 ceph-mon[50525]: pgmap v25: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T06:42:40.644 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:42:40 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-1[56477]: 2026-03-10T06:42:40.353+0000 7f78e0539640 -1 osd.1 0 waiting for initial osdmap 2026-03-10T06:42:40.644 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:42:40 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-1[56477]: 2026-03-10T06:42:40.363+0000 7f78dbb62640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T06:42:41.645 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:41 vm08 ceph-mon[52886]: purged_snaps scrub starts 2026-03-10T06:42:41.645 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:41 vm08 ceph-mon[52886]: purged_snaps scrub ok 2026-03-10T06:42:41.645 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:41 vm08 ceph-mon[52886]: from='osd.1 ' entity='osd.1' 2026-03-10T06:42:41.645 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:41 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:41.856 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:41 vm09 ceph-mon[55409]: purged_snaps scrub starts 2026-03-10T06:42:41.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:41 vm09 ceph-mon[55409]: purged_snaps scrub ok 2026-03-10T06:42:41.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:41 vm09 ceph-mon[55409]: from='osd.1 ' entity='osd.1' 2026-03-10T06:42:41.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:41 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:41.861 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:41 vm01 ceph-mon[50525]: purged_snaps scrub starts 2026-03-10T06:42:41.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:41 vm01 ceph-mon[50525]: purged_snaps scrub ok 2026-03-10T06:42:41.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:41 vm01 ceph-mon[50525]: from='osd.1 ' entity='osd.1' 2026-03-10T06:42:41.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:41 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:42.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:42 vm08 ceph-mon[52886]: osd.1 [v2:192.168.123.108:6800/3501309662,v1:192.168.123.108:6801/3501309662] boot 2026-03-10T06:42:42.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:42 vm08 ceph-mon[52886]: osdmap e12: 3 total, 2 up, 3 in 2026-03-10T06:42:42.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:42 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:42.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:42 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:42.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:42 vm08 ceph-mon[52886]: pgmap v27: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T06:42:42.822 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:42 vm09 ceph-mon[55409]: osd.1 [v2:192.168.123.108:6800/3501309662,v1:192.168.123.108:6801/3501309662] boot 2026-03-10T06:42:42.822 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:42 vm09 ceph-mon[55409]: osdmap e12: 3 total, 2 up, 3 in 2026-03-10T06:42:42.822 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:42 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 
1}]: dispatch 2026-03-10T06:42:42.822 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:42 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:42.822 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:42 vm09 ceph-mon[55409]: pgmap v27: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T06:42:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:42 vm01 ceph-mon[50525]: osd.1 [v2:192.168.123.108:6800/3501309662,v1:192.168.123.108:6801/3501309662] boot 2026-03-10T06:42:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:42 vm01 ceph-mon[50525]: osdmap e12: 3 total, 2 up, 3 in 2026-03-10T06:42:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:42 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T06:42:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:42 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:42.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:42 vm01 ceph-mon[50525]: pgmap v27: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T06:42:43.386 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:43 vm09 ceph-mon[55409]: osdmap e13: 3 total, 2 up, 3 in 2026-03-10T06:42:43.386 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:43 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:43.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:43 vm08 ceph-mon[52886]: osdmap e13: 3 total, 2 up, 3 in 2026-03-10T06:42:43.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:43 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 
2026-03-10T06:42:43.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:43 vm01 ceph-mon[50525]: osdmap e13: 3 total, 2 up, 3 in 2026-03-10T06:42:43.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:43 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:44.470 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:44 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T06:42:44.470 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:44 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:44.470 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:44 vm09 ceph-mon[55409]: Deploying daemon osd.2 on vm09 2026-03-10T06:42:44.470 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:44 vm09 ceph-mon[55409]: pgmap v29: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T06:42:44.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:44 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T06:42:44.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:44 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:44.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:44 vm08 ceph-mon[52886]: Deploying daemon osd.2 on vm09 2026-03-10T06:42:44.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:44 vm08 ceph-mon[52886]: pgmap v29: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T06:42:44.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:44 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 
cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T06:42:44.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:44 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:44.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:44 vm01 ceph-mon[50525]: Deploying daemon osd.2 on vm09 2026-03-10T06:42:44.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:44 vm01 ceph-mon[50525]: pgmap v29: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T06:42:45.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:45 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:45.608 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:45 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:45.608 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:45 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:45.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:45 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:45.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:45 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:45.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:45 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:45.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:45 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:45.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:45 vm08 ceph-mon[52886]: 
from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:45.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:45 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.527 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 2 on host 'vm09' 2026-03-10T06:42:46.589 DEBUG:teuthology.orchestra.run.vm09:osd.2> sudo journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@osd.2.service 2026-03-10T06:42:46.592 INFO:tasks.cephadm:Waiting for 3 OSDs to come up... 2026-03-10T06:42:46.592 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd stat -f json 2026-03-10T06:42:46.756 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:46 vm09 ceph-mon[55409]: pgmap v30: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T06:42:46.756 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:46 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.756 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:46 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.756 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:46 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:46.756 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:46 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:46.756 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:46 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.756 
INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:46 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:46.756 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:46 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.756 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:46 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.765 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:46.827 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:46 vm01 ceph-mon[50525]: pgmap v30: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T06:42:46.827 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:46 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.827 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:46 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.827 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:46 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:46.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:46 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:46.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:46 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:46 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-10T06:42:46.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:46 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:46 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:46 vm08 ceph-mon[52886]: pgmap v30: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T06:42:46.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:46 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:46 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:46 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:46.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:46 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:46.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:46 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:46 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:42:46.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:46 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:46 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:46.992 
INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:42:47.052 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":13,"num_osds":3,"num_up_osds":2,"osd_up_since":1773124961,"num_in_osds":3,"osd_in_since":1773124959,"num_remapped_pgs":0} 2026-03-10T06:42:47.407 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:42:47 vm09 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-2[58808]: 2026-03-10T06:42:47.148+0000 7fa4e50a0740 -1 osd.2 0 log_to_monitors true 2026-03-10T06:42:47.856 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:47 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/3025642001' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T06:42:47.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:47 vm09 ceph-mon[55409]: from='osd.2 [v2:192.168.123.109:6800/3490681787,v1:192.168.123.109:6801/3490681787]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T06:42:47.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:47 vm09 ceph-mon[55409]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T06:42:47.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:47 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:47.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:47 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:47.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:47 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T06:42:47.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:47 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:47.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:47 
vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:47.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:47 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:47.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:47 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:47.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:47 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/3025642001' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T06:42:47.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:47 vm01 ceph-mon[50525]: from='osd.2 [v2:192.168.123.109:6800/3490681787,v1:192.168.123.109:6801/3490681787]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T06:42:47.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:47 vm01 ceph-mon[50525]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T06:42:47.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:47 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:47.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:47 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:47.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:47 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T06:42:47.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:47 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 
2026-03-10T06:42:47.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:47 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:47.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:47 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:47.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:47 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:47.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:47 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/3025642001' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T06:42:47.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:47 vm08 ceph-mon[52886]: from='osd.2 [v2:192.168.123.109:6800/3490681787,v1:192.168.123.109:6801/3490681787]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T06:42:47.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:47 vm08 ceph-mon[52886]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T06:42:47.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:47 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:47.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:47 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:47.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:47 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T06:42:47.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:47 vm08 
ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:47.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:47 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:42:47.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:47 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:42:47.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:47 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:42:48.053 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd stat -f json 2026-03-10T06:42:48.213 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:48.430 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:42:48.476 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":14,"num_osds":3,"num_up_osds":2,"osd_up_since":1773124961,"num_in_osds":3,"osd_in_since":1773124959,"num_remapped_pgs":0} 2026-03-10T06:42:48.589 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:48 vm01 ceph-mon[50525]: Detected new or changed devices on vm09 2026-03-10T06:42:48.589 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:48 vm01 ceph-mon[50525]: Adjusting osd_memory_target on vm09 to 4353M 2026-03-10T06:42:48.589 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:48 vm01 ceph-mon[50525]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T06:42:48.589 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:48 vm01 
ceph-mon[50525]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T06:42:48.589 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:48 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:48.589 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:48 vm01 ceph-mon[50525]: from='osd.2 [v2:192.168.123.109:6800/3490681787,v1:192.168.123.109:6801/3490681787]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T06:42:48.589 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:48 vm01 ceph-mon[50525]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T06:42:48.589 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:48 vm01 ceph-mon[50525]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T06:42:48.589 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:48 vm01 ceph-mon[50525]: from='client.? 
192.168.123.101:0/1556977035' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T06:42:48.856 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:48 vm09 ceph-mon[55409]: Detected new or changed devices on vm09 2026-03-10T06:42:48.856 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:48 vm09 ceph-mon[55409]: Adjusting osd_memory_target on vm09 to 4353M 2026-03-10T06:42:48.856 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:48 vm09 ceph-mon[55409]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T06:42:48.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:48 vm09 ceph-mon[55409]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T06:42:48.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:48 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:48.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:48 vm09 ceph-mon[55409]: from='osd.2 [v2:192.168.123.109:6800/3490681787,v1:192.168.123.109:6801/3490681787]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T06:42:48.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:48 vm09 ceph-mon[55409]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T06:42:48.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:48 vm09 ceph-mon[55409]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T06:42:48.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:48 vm09 ceph-mon[55409]: from='client.? 
192.168.123.101:0/1556977035' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T06:42:48.857 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:42:48 vm09 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-2[58808]: 2026-03-10T06:42:48.591+0000 7fa4e1834640 -1 osd.2 0 waiting for initial osdmap 2026-03-10T06:42:48.857 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:42:48 vm09 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-2[58808]: 2026-03-10T06:42:48.595+0000 7fa4dce4b640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T06:42:48.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:48 vm08 ceph-mon[52886]: Detected new or changed devices on vm09 2026-03-10T06:42:48.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:48 vm08 ceph-mon[52886]: Adjusting osd_memory_target on vm09 to 4353M 2026-03-10T06:42:48.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:48 vm08 ceph-mon[52886]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T06:42:48.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:48 vm08 ceph-mon[52886]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T06:42:48.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:48 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:48.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:48 vm08 ceph-mon[52886]: from='osd.2 [v2:192.168.123.109:6800/3490681787,v1:192.168.123.109:6801/3490681787]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T06:42:48.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:48 vm08 ceph-mon[52886]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, 
"args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T06:42:48.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:48 vm08 ceph-mon[52886]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T06:42:48.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:48 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/1556977035' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T06:42:49.477 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd stat -f json 2026-03-10T06:42:49.648 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:49.762 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:49 vm01 ceph-mon[50525]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T06:42:49.762 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:49 vm01 ceph-mon[50525]: osdmap e15: 3 total, 2 up, 3 in 2026-03-10T06:42:49.762 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:49 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:49.762 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:49 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:49.854 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:42:49.856 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:49 vm09 ceph-mon[55409]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", 
"root=default"]}]': finished 2026-03-10T06:42:49.856 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:49 vm09 ceph-mon[55409]: osdmap e15: 3 total, 2 up, 3 in 2026-03-10T06:42:49.856 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:49 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:49.856 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:49 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:49.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:49 vm08 ceph-mon[52886]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T06:42:49.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:49 vm08 ceph-mon[52886]: osdmap e15: 3 total, 2 up, 3 in 2026-03-10T06:42:49.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:49 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:49.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:49 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:49.919 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1773124969,"num_in_osds":3,"osd_in_since":1773124959,"num_remapped_pgs":0} 2026-03-10T06:42:49.920 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd dump --format=json 2026-03-10T06:42:50.079 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 
2026-03-10T06:42:50.288 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:42:50.288 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":16,"fsid":"0204d884-1c4c-11f1-a017-9957fb527c0e","created":"2026-03-10T06:41:32.129305+0000","modified":"2026-03-10T06:42:49.588758+0000","last_up_change":"2026-03-10T06:42:49.588758+0000","last_in_change":"2026-03-10T06:42:39.456464+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"63e03954-df21-4442-a789-b5f9f56cd792","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":2271571456},{"type":"v1","addr":"192.168.123.101:6803","nonce":2271571456}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":2271571456},{"type":"v1","addr":"192.168.123.101:6805","nonce":2271571456}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":2271571456},{"type":"v1","addr":"192.168.123.101:6809","nonce":2271571456}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":2271571456},{"type":"v1","addr":"192.168.123.101:6807","nonce":2271571456}]},"public_addr":"192.168.123.101:6803/2271571456","cluster_addr":"192.168.123.101:6805/2271571456","heartbeat_back_addr":"192.168.123.101:6809/2271571456","heartbeat_front_addr":"192.168.123.101:6807/2271571456","state":["exists","up"]},{"osd":1,"uuid":"48860b07-2f08-4996-aacf-d45d8f4e30c7","up":1,"in":1
,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":3501309662},{"type":"v1","addr":"192.168.123.108:6801","nonce":3501309662}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":3501309662},{"type":"v1","addr":"192.168.123.108:6803","nonce":3501309662}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":3501309662},{"type":"v1","addr":"192.168.123.108:6807","nonce":3501309662}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":3501309662},{"type":"v1","addr":"192.168.123.108:6805","nonce":3501309662}]},"public_addr":"192.168.123.108:6801/3501309662","cluster_addr":"192.168.123.108:6803/3501309662","heartbeat_back_addr":"192.168.123.108:6807/3501309662","heartbeat_front_addr":"192.168.123.108:6805/3501309662","state":["exists","up"]},{"osd":2,"uuid":"9182fe25-a791-4c18-b7b7-ed943e90fb42","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":3490681787},{"type":"v1","addr":"192.168.123.109:6801","nonce":3490681787}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":3490681787},{"type":"v1","addr":"192.168.123.109:6803","nonce":3490681787}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":3490681787},{"type":"v1","addr":"192.168.123.109:6807","nonce":3490681787}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":3490681787},{"type":"v1","addr":"192.168.123.109:6805","nonce":3490681787}]},"public_addr":"192.168.123.109:6801/3490681787","cluster_addr":"192.168.123.109:6803/3490681787","heartbeat_back_addr":"192.168.123.109:6807/3490681787","heartbeat_front_addr":"
192.168.123.109:6805/3490681787","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T06:42:30.574188+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T06:42:39.135393+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:0/2770342804":"2026-03-11T06:41:51.890654+0000","192.168.123.101:6801/2407550564":"2026-03-11T06:41:51.890654+0000","192.168.123.101:0/1792562782":"2026-03-11T06:41:51.890654+0000","192.168.123.101:0/4100973882":"2026-03-11T06:41:51.890654+0000","192.168.123.101:0/1249315581":"2026-03-11T06:41:42.713593+0000","192.168.123.101:0/4023160557":"2026-03-11T06:41:42.713593+0000","192.168.123.101:0/3209905793":"2026-03-11T06:41:42.713593+0000","192.168.123.101:6800/2407550564":"2026-03-11T06:41:51.890654+0000","192.168.123.101:6801/2296463104":"2026-03-11T06:41:42.713593+0000","192.168.123.101:6800/2296463104":"2026-03-11T06:41:42.713593+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T06:42:50.350 INFO:tasks.cephadm.ceph_manager.ceph:[] 2026-03-10T06:42:50.350 INFO:tasks.cephadm:Setting up client nodes... 
2026-03-10T06:42:50.350 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-10T06:42:50.350 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-10T06:42:50.350 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph mgr dump --format=json 2026-03-10T06:42:50.510 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:50.624 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:50 vm01 ceph-mon[50525]: purged_snaps scrub starts 2026-03-10T06:42:50.624 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:50 vm01 ceph-mon[50525]: purged_snaps scrub ok 2026-03-10T06:42:50.624 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:50 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:50.624 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:50 vm01 ceph-mon[50525]: osd.2 [v2:192.168.123.109:6800/3490681787,v1:192.168.123.109:6801/3490681787] boot 2026-03-10T06:42:50.624 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:50 vm01 ceph-mon[50525]: osdmap e16: 3 total, 3 up, 3 in 2026-03-10T06:42:50.624 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:50 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:50.624 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:50 vm01 ceph-mon[50525]: from='client.? 
192.168.123.101:0/2201186629' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T06:42:50.624 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:50 vm01 ceph-mon[50525]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T06:42:50.624 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:50 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T06:42:50.624 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:50 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/372632698' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T06:42:50.743 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:42:50.806 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":14,"flags":0,"active_gid":14150,"active_name":"a","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6800","nonce":4258148063},{"type":"v1","addr":"192.168.123.101:6801","nonce":4258148063}]},"active_addr":"192.168.123.101:6801/4258148063","active_change":"2026-03-10T06:41:51.890743+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":24116,"name":"b","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack traces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.101:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":2437998718}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.1
68.123.101:0","nonce":800185767}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":3862859674}]}]} 2026-03-10T06:42:50.808 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T06:42:50.808 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T06:42:50.808 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd dump --format=json 2026-03-10T06:42:50.856 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:50 vm09 ceph-mon[55409]: purged_snaps scrub starts 2026-03-10T06:42:50.856 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:50 vm09 ceph-mon[55409]: purged_snaps scrub ok 2026-03-10T06:42:50.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:50 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:50.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:50 vm09 ceph-mon[55409]: osd.2 [v2:192.168.123.109:6800/3490681787,v1:192.168.123.109:6801/3490681787] boot 2026-03-10T06:42:50.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:50 vm09 ceph-mon[55409]: osdmap e16: 3 total, 3 up, 3 in 2026-03-10T06:42:50.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:50 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:50.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:50 vm09 ceph-mon[55409]: from='client.? 
192.168.123.101:0/2201186629' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T06:42:50.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:50 vm09 ceph-mon[55409]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T06:42:50.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:50 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T06:42:50.857 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:50 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/372632698' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T06:42:50.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:50 vm08 ceph-mon[52886]: purged_snaps scrub starts 2026-03-10T06:42:50.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:50 vm08 ceph-mon[52886]: purged_snaps scrub ok 2026-03-10T06:42:50.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:50 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:50.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:50 vm08 ceph-mon[52886]: osd.2 [v2:192.168.123.109:6800/3490681787,v1:192.168.123.109:6801/3490681787] boot 2026-03-10T06:42:50.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:50 vm08 ceph-mon[52886]: osdmap e16: 3 total, 3 up, 3 in 2026-03-10T06:42:50.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:50 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T06:42:50.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:50 vm08 ceph-mon[52886]: from='client.? 
192.168.123.101:0/2201186629' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T06:42:50.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:50 vm08 ceph-mon[52886]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T06:42:50.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:50 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T06:42:50.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:50 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/372632698' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T06:42:50.964 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:51.166 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:42:51.166 
INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":17,"fsid":"0204d884-1c4c-11f1-a017-9957fb527c0e","created":"2026-03-10T06:41:32.129305+0000","modified":"2026-03-10T06:42:50.595865+0000","last_up_change":"2026-03-10T06:42:49.588758+0000","last_in_change":"2026-03-10T06:42:39.456464+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T06:42:49.928733+0000","flags":32769,"flags_names":"hashpspool,creating","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params"
:{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{},"read_balance":{"score_type":"Fair distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"63e03954-df21-4442-a789-b5f9f56cd792","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":2271571456},{"type":"v1","addr":"192.168.123.101:6803","nonce":2271571456}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":2271571456},{"type":"v1","addr":"192.168.123.101:6805","nonce":2271571456}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":2271571456},{"type":"v1","addr":"192.168.123.101:6809","nonce":2271571456}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":2271571456},{"type":"v1","addr":"192.168.123.101:6807","nonce":2271571456}]},"public_addr":"192.168.123.101:6803/2271571456","cluster_addr":"192.168.123.101:6805/2271571456","heartbeat_back_addr":"192.168.123.101:6809/2271571456","heartbeat_front_addr":"192.168.123.101:6807/2271571456","state":["exists","up"]},{"osd":1,"uuid":"48860b07-2f08-4996-aacf-d45d8f4e30c7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":3501309662},{"type":"v1","addr":"192.168.123.108:6801","nonce":3501309662}]
},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":3501309662},{"type":"v1","addr":"192.168.123.108:6803","nonce":3501309662}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":3501309662},{"type":"v1","addr":"192.168.123.108:6807","nonce":3501309662}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":3501309662},{"type":"v1","addr":"192.168.123.108:6805","nonce":3501309662}]},"public_addr":"192.168.123.108:6801/3501309662","cluster_addr":"192.168.123.108:6803/3501309662","heartbeat_back_addr":"192.168.123.108:6807/3501309662","heartbeat_front_addr":"192.168.123.108:6805/3501309662","state":["exists","up"]},{"osd":2,"uuid":"9182fe25-a791-4c18-b7b7-ed943e90fb42","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":3490681787},{"type":"v1","addr":"192.168.123.109:6801","nonce":3490681787}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":3490681787},{"type":"v1","addr":"192.168.123.109:6803","nonce":3490681787}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":3490681787},{"type":"v1","addr":"192.168.123.109:6807","nonce":3490681787}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":3490681787},{"type":"v1","addr":"192.168.123.109:6805","nonce":3490681787}]},"public_addr":"192.168.123.109:6801/3490681787","cluster_addr":"192.168.123.109:6803/3490681787","heartbeat_back_addr":"192.168.123.109:6807/3490681787","heartbeat_front_addr":"192.168.123.109:6805/3490681787","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T06:42:30.574188+0000","dead_epoch":0},{"osd":
1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T06:42:39.135393+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T06:42:48.119328+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:0/2770342804":"2026-03-11T06:41:51.890654+0000","192.168.123.101:6801/2407550564":"2026-03-11T06:41:51.890654+0000","192.168.123.101:0/1792562782":"2026-03-11T06:41:51.890654+0000","192.168.123.101:0/4100973882":"2026-03-11T06:41:51.890654+0000","192.168.123.101:0/1249315581":"2026-03-11T06:41:42.713593+0000","192.168.123.101:0/4023160557":"2026-03-11T06:41:42.713593+0000","192.168.123.101:0/3209905793":"2026-03-11T06:41:42.713593+0000","192.168.123.101:6800/2407550564":"2026-03-11T06:41:51.890654+0000","192.168.123.101:6801/2296463104":"2026-03-11T06:41:42.713593+0000","192.168.123.101:6800/2296463104":"2026-03-11T06:41:42.713593+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T06:42:51.224 INFO:tasks.cephadm.ceph_manager.ceph:all up! 
2026-03-10T06:42:51.224 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd dump --format=json 2026-03-10T06:42:51.372 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:51.579 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:42:51.579 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":17,"fsid":"0204d884-1c4c-11f1-a017-9957fb527c0e","created":"2026-03-10T06:41:32.129305+0000","modified":"2026-03-10T06:42:50.595865+0000","last_up_change":"2026-03-10T06:42:49.588758+0000","last_in_change":"2026-03-10T06:42:39.456464+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T06:42:49.928733+0000","flags":32769,"flags_names":"hashpspool,creating","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_p
reluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{},"read_balance":{"score_type":"Fair distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"63e03954-df21-4442-a789-b5f9f56cd792","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":2271571456},{"type":"v1","addr":"192.168.123.101:6803","nonce":2271571456}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":2271571456},{"type":"v1","addr":"192.168.123.101:6805","nonce":2271571456}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":2271571456},{"type":"v1","addr":"192.168.123.101:6809","nonce":2271571456}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":2271571456},{"type":"v1","addr":"192.168.123.101:6807","nonce":2271571456}]},"public_addr":"192.168.123.101:6803/2271571456","cluster_addr":"192.168.123.101:6805/22
71571456","heartbeat_back_addr":"192.168.123.101:6809/2271571456","heartbeat_front_addr":"192.168.123.101:6807/2271571456","state":["exists","up"]},{"osd":1,"uuid":"48860b07-2f08-4996-aacf-d45d8f4e30c7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":3501309662},{"type":"v1","addr":"192.168.123.108:6801","nonce":3501309662}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":3501309662},{"type":"v1","addr":"192.168.123.108:6803","nonce":3501309662}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":3501309662},{"type":"v1","addr":"192.168.123.108:6807","nonce":3501309662}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":3501309662},{"type":"v1","addr":"192.168.123.108:6805","nonce":3501309662}]},"public_addr":"192.168.123.108:6801/3501309662","cluster_addr":"192.168.123.108:6803/3501309662","heartbeat_back_addr":"192.168.123.108:6807/3501309662","heartbeat_front_addr":"192.168.123.108:6805/3501309662","state":["exists","up"]},{"osd":2,"uuid":"9182fe25-a791-4c18-b7b7-ed943e90fb42","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":3490681787},{"type":"v1","addr":"192.168.123.109:6801","nonce":3490681787}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":3490681787},{"type":"v1","addr":"192.168.123.109:6803","nonce":3490681787}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":3490681787},{"type":"v1","addr":"192.168.123.109:6807","nonce":3490681787}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":3490681787},{"type":"v1","addr":"192.1
68.123.109:6805","nonce":3490681787}]},"public_addr":"192.168.123.109:6801/3490681787","cluster_addr":"192.168.123.109:6803/3490681787","heartbeat_back_addr":"192.168.123.109:6807/3490681787","heartbeat_front_addr":"192.168.123.109:6805/3490681787","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T06:42:30.574188+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T06:42:39.135393+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T06:42:48.119328+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:0/2770342804":"2026-03-11T06:41:51.890654+0000","192.168.123.101:6801/2407550564":"2026-03-11T06:41:51.890654+0000","192.168.123.101:0/1792562782":"2026-03-11T06:41:51.890654+0000","192.168.123.101:0/4100973882":"2026-03-11T06:41:51.890654+0000","192.168.123.101:0/1249315581":"2026-03-11T06:41:42.713593+0000","192.168.123.101:0/4023160557":"2026-03-11T06:41:42.713593+0000","192.168.123.101:0/3209905793":"2026-03-11T06:41:42.713593+0000","192.168.123.101:6800/2407550564":"2026-03-11T06:41:51.890654+0000","192.168.123.101:6801/2296463104":"2026-03-11T06:41:42.713593+0000","192.168.123.101:6800/2296463104":"2026-03-11T06:41:42.713593+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mod
e":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T06:42:51.632 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph tell osd.0 flush_pg_stats 2026-03-10T06:42:51.632 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph tell osd.1 flush_pg_stats 2026-03-10T06:42:51.632 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph tell osd.2 flush_pg_stats 2026-03-10T06:42:51.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:51 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T06:42:51.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:51 vm01 ceph-mon[50525]: osdmap e17: 3 total, 3 up, 3 in 2026-03-10T06:42:51.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:51 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T06:42:51.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:51 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/1119211990' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T06:42:51.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:51 vm01 ceph-mon[50525]: from='client.? 
192.168.123.101:0/2040370920' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T06:42:51.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:51 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/4210805343' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T06:42:51.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:51 vm01 sudo[63981]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T06:42:51.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:51 vm01 sudo[63981]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T06:42:51.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:51 vm01 sudo[63981]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T06:42:51.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:51 vm01 sudo[63981]: pam_unix(sudo:session): session closed for user root 2026-03-10T06:42:51.861 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:42:51 vm01 sudo[63970]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 2026-03-10T06:42:51.861 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:42:51 vm01 sudo[63970]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T06:42:51.861 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:42:51 vm01 sudo[63970]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T06:42:51.861 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:42:51 vm01 sudo[63970]: pam_unix(sudo:session): session closed for user root 2026-03-10T06:42:51.885 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:51.894 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:42:51 vm08 sudo[59701]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o 
/dev/vde 2026-03-10T06:42:51.894 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:42:51 vm08 sudo[59701]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T06:42:51.894 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:42:51 vm08 sudo[59701]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T06:42:51.894 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:42:51 vm08 sudo[59701]: pam_unix(sudo:session): session closed for user root 2026-03-10T06:42:51.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:51 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T06:42:51.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:51 vm08 ceph-mon[52886]: osdmap e17: 3 total, 3 up, 3 in 2026-03-10T06:42:51.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:51 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T06:42:51.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:51 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/1119211990' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T06:42:51.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:51 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/2040370920' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T06:42:51.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:51 vm08 ceph-mon[52886]: from='client.? 
192.168.123.101:0/4210805343' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T06:42:51.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:51 vm08 sudo[59705]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T06:42:51.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:51 vm08 sudo[59705]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T06:42:51.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:51 vm08 sudo[59705]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T06:42:51.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:51 vm08 sudo[59705]: pam_unix(sudo:session): session closed for user root 2026-03-10T06:42:51.988 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:52.001 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:52.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:51 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T06:42:52.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:51 vm09 ceph-mon[55409]: osdmap e17: 3 total, 3 up, 3 in 2026-03-10T06:42:52.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:51 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T06:42:52.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:51 vm09 ceph-mon[55409]: from='client.? 
192.168.123.101:0/1119211990' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T06:42:52.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:51 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/2040370920' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T06:42:52.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:51 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/4210805343' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T06:42:52.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:51 vm09 sudo[61835]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T06:42:52.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:51 vm09 sudo[61835]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T06:42:52.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:51 vm09 sudo[61835]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T06:42:52.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:51 vm09 sudo[61835]: pam_unix(sudo:session): session closed for user root 2026-03-10T06:42:52.107 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:42:51 vm09 sudo[61831]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 2026-03-10T06:42:52.107 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:42:51 vm09 sudo[61831]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T06:42:52.107 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:42:51 vm09 sudo[61831]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T06:42:52.107 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:42:51 vm09 sudo[61831]: pam_unix(sudo:session): session closed for user root 2026-03-10T06:42:52.155 INFO:teuthology.orchestra.run.vm01.stdout:68719476738 
2026-03-10T06:42:52.155 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd last-stat-seq osd.2 2026-03-10T06:42:52.266 INFO:teuthology.orchestra.run.vm01.stdout:34359738373 2026-03-10T06:42:52.266 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd last-stat-seq osd.0 2026-03-10T06:42:52.339 INFO:teuthology.orchestra.run.vm01.stdout:51539607556 2026-03-10T06:42:52.339 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd last-stat-seq osd.1 2026-03-10T06:42:52.409 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:52.541 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:52.712 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:52.741 INFO:teuthology.orchestra.run.vm01.stdout:68719476737 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: osdmap e18: 3 total, 3 up, 3 in 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='admin socket' 
entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='admin socket' 
entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T06:42:52.741 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:52 vm01 ceph-mon[50525]: pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:42:52.822 INFO:tasks.cephadm.ceph_manager.ceph:need seq 68719476738 got 68719476737 for osd.2 2026-03-10T06:42:52.888 INFO:teuthology.orchestra.run.vm01.stdout:34359738372 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: osdmap e18: 3 total, 3 up, 3 in 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='admin socket' entity='admin 
socket' cmd=smart args=[json]: finished 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='mgr.14150 
192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T06:42:52.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:52 vm08 ceph-mon[52886]: pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:42:52.937 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738373 got 34359738372 for osd.0 2026-03-10T06:42:52.995 INFO:teuthology.orchestra.run.vm01.stdout:51539607555 2026-03-10T06:42:53.057 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607556 got 51539607555 for osd.1 2026-03-10T06:42:53.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T06:42:53.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: osdmap e18: 3 total, 3 up, 3 in 2026-03-10T06:42:53.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T06:42:53.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T06:42:53.107 
INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:42:53.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:53.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:53.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T06:42:53.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:42:53.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:53.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:53.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T06:42:53.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T06:42:53.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", 
"id": "a"}]: dispatch 2026-03-10T06:42:53.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:42:53.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:42:53.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T06:42:53.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:52 vm09 ceph-mon[55409]: pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:42:53.823 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd last-stat-seq osd.2 2026-03-10T06:42:53.893 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:53 vm08 ceph-mon[52886]: osdmap e19: 3 total, 3 up, 3 in 2026-03-10T06:42:53.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:53 vm08 ceph-mon[52886]: mgrmap e15: a(active, since 60s), standbys: b 2026-03-10T06:42:53.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:53 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/83677801' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T06:42:53.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:53 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/1583096102' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T06:42:53.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:53 vm08 ceph-mon[52886]: from='client.? 
192.168.123.101:0/3488728767' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T06:42:53.938 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd last-stat-seq osd.0 2026-03-10T06:42:53.977 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:53 vm01 ceph-mon[50525]: osdmap e19: 3 total, 3 up, 3 in 2026-03-10T06:42:53.977 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:53 vm01 ceph-mon[50525]: mgrmap e15: a(active, since 60s), standbys: b 2026-03-10T06:42:53.977 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:53 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/83677801' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T06:42:53.977 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:53 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/1583096102' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T06:42:53.977 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:53 vm01 ceph-mon[50525]: from='client.? 
192.168.123.101:0/3488728767' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T06:42:53.979 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:54.057 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd last-stat-seq osd.1 2026-03-10T06:42:54.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:53 vm09 ceph-mon[55409]: osdmap e19: 3 total, 3 up, 3 in 2026-03-10T06:42:54.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:53 vm09 ceph-mon[55409]: mgrmap e15: a(active, since 60s), standbys: b 2026-03-10T06:42:54.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:53 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/83677801' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T06:42:54.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:53 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/1583096102' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T06:42:54.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:53 vm09 ceph-mon[55409]: from='client.? 
192.168.123.101:0/3488728767' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T06:42:54.176 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:54.235 INFO:teuthology.orchestra.run.vm01.stdout:68719476737 2026-03-10T06:42:54.307 INFO:tasks.cephadm.ceph_manager.ceph:need seq 68719476738 got 68719476737 for osd.2 2026-03-10T06:42:54.354 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:54.484 INFO:teuthology.orchestra.run.vm01.stdout:34359738372 2026-03-10T06:42:54.534 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738373 got 34359738372 for osd.0 2026-03-10T06:42:54.577 INFO:teuthology.orchestra.run.vm01.stdout:51539607555 2026-03-10T06:42:54.620 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607556 got 51539607555 for osd.1 2026-03-10T06:42:54.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:54 vm01 ceph-mon[50525]: pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:42:54.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:54 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/2380071028' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T06:42:54.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:54 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/3615680292' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T06:42:54.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:54 vm01 ceph-mon[50525]: from='client.? 
192.168.123.101:0/4006029282' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T06:42:54.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:54 vm08 ceph-mon[52886]: pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:42:54.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:54 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/2380071028' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T06:42:54.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:54 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/3615680292' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T06:42:54.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:54 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/4006029282' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T06:42:55.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:54 vm09 ceph-mon[55409]: pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:42:55.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:54 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/2380071028' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T06:42:55.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:54 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/3615680292' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T06:42:55.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:54 vm09 ceph-mon[55409]: from='client.? 
192.168.123.101:0/4006029282' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T06:42:55.308 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd last-stat-seq osd.2 2026-03-10T06:42:55.456 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:55.535 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd last-stat-seq osd.0 2026-03-10T06:42:55.621 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph osd last-stat-seq osd.1 2026-03-10T06:42:55.667 INFO:teuthology.orchestra.run.vm01.stdout:68719476738 2026-03-10T06:42:55.753 INFO:tasks.cephadm.ceph_manager.ceph:need seq 68719476738 got 68719476738 for osd.2 2026-03-10T06:42:55.754 DEBUG:teuthology.parallel:result is None 2026-03-10T06:42:55.764 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:55.882 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:55.924 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:55 vm01 ceph-mon[50525]: from='client.? 
192.168.123.101:0/635032036' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T06:42:56.018 INFO:teuthology.orchestra.run.vm01.stdout:34359738374 2026-03-10T06:42:56.087 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738373 got 34359738374 for osd.0 2026-03-10T06:42:56.087 DEBUG:teuthology.parallel:result is None 2026-03-10T06:42:56.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:55 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/635032036' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T06:42:56.108 INFO:teuthology.orchestra.run.vm01.stdout:51539607556 2026-03-10T06:42:56.143 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:55 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/635032036' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T06:42:56.148 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607556 got 51539607556 for osd.1 2026-03-10T06:42:56.148 DEBUG:teuthology.parallel:result is None 2026-03-10T06:42:56.148 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T06:42:56.148 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph pg dump --format=json 2026-03-10T06:42:56.344 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:56.545 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:42:56.545 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-10T06:42:56.589 
INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":41,"stamp":"2026-03-10T06:42:55.904928+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":82724,"kb_used_data":1820,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62819548,"statfs":{"total":64411926528,"available":64327217152,"internally_reserved":0,"allocated":1863680,"data_stored":1521660,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4769,"internal_metadata":82373983},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_
ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"3.294094"},"pg_stats":[{"pgid":"1.0","version":"18'32","reported_seq":54,"reported_epoch":18,"state":"active+clean","last_fresh":"2026-03-10T06:42:51.816790+0000","last_change":"2026-03-10T06:42:51.607860+0000","last_active":"2026-03-10T06:42:51.816790+0000","last_peered":"2026-03-10T06:42:51.816790+0000","last_clean":"2026-03-10T06:42:51.816790+0000","last_became_active":"2026-03-10T06:42:51.607750+0000","last_became_peered":"2026-03-10T06:42:51.607750+0000","last_unstale":"2026-03-10T06:42:51.816790+0000","last_undegraded":"2026-03-10T06:42:51.816790+0000","last_fullsized":"2026-03-10T06:42:51.816790+0000","mapping_epoch":17,"log_start":"0'0","ondisk_log_start":"0'0","created"
:17,"last_epoch_clean":18,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T06:42:50.595865+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T06:42:50.595865+0000","last_clean_scrub_stamp":"2026-03-10T06:42:50.595865+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:41:52.841809+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,0],"acting":[1,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects
_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":16,"seq":68719476739,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27584,"kb_used_data":612,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939840,"statfs":{"total":21470642176,"available":21442396160,"internally_reserved":0,"allocated":626688,"data_stored":509738,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":12,"seq":51539607556,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27572,"kb_used_data":604,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939852,"statfs":{"total":21470642176,"a
vailable":21442408448,"internally_reserved":0,"allocated":618496,"data_stored":505961,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738374,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27568,"kb_used_data":604,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939856,"statfs":{"total":21470642176,"available":21442412544,"internally_reserved":0,"allocated":618496,"data_stored":505961,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T06:42:56.590 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm 
--image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph pg dump --format=json 2026-03-10T06:42:56.739 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:56.796 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:56 vm01 ceph-mon[50525]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:42:56.796 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:56 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/541893314' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T06:42:56.796 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:56 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/1050879321' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T06:42:56.935 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:42:56.935 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-10T06:42:56.993 
INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":41,"stamp":"2026-03-10T06:42:55.904928+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":82724,"kb_used_data":1820,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62819548,"statfs":{"total":64411926528,"available":64327217152,"internally_reserved":0,"allocated":1863680,"data_stored":1521660,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4769,"internal_metadata":82373983},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_
ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"3.294094"},"pg_stats":[{"pgid":"1.0","version":"18'32","reported_seq":54,"reported_epoch":18,"state":"active+clean","last_fresh":"2026-03-10T06:42:51.816790+0000","last_change":"2026-03-10T06:42:51.607860+0000","last_active":"2026-03-10T06:42:51.816790+0000","last_peered":"2026-03-10T06:42:51.816790+0000","last_clean":"2026-03-10T06:42:51.816790+0000","last_became_active":"2026-03-10T06:42:51.607750+0000","last_became_peered":"2026-03-10T06:42:51.607750+0000","last_unstale":"2026-03-10T06:42:51.816790+0000","last_undegraded":"2026-03-10T06:42:51.816790+0000","last_fullsized":"2026-03-10T06:42:51.816790+0000","mapping_epoch":17,"log_start":"0'0","ondisk_log_start":"0'0","created"
:17,"last_epoch_clean":18,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T06:42:50.595865+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T06:42:50.595865+0000","last_clean_scrub_stamp":"2026-03-10T06:42:50.595865+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:41:52.841809+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,0],"acting":[1,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects
_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":16,"seq":68719476739,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27584,"kb_used_data":612,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939840,"statfs":{"total":21470642176,"available":21442396160,"internally_reserved":0,"allocated":626688,"data_stored":509738,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":12,"seq":51539607556,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27572,"kb_used_data":604,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939852,"statfs":{"total":21470642176,"a
vailable":21442408448,"internally_reserved":0,"allocated":618496,"data_stored":505961,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738374,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27568,"kb_used_data":604,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939856,"statfs":{"total":21470642176,"available":21442412544,"internally_reserved":0,"allocated":618496,"data_stored":505961,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T06:42:56.993 INFO:tasks.cephadm.ceph_manager.ceph:clean! 
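The "waiting for clean" phase above polls `ceph pg dump --format=json` until every PG reports `active+clean` (and, just before it, polls `ceph osd last-stat-seq` until each OSD's reported sequence catches up with the needed one). A minimal sketch of that clean-check predicate, assuming the `pg_map.pg_stats` layout shown in the dump output above; the helper name and sample payload are illustrative, not teuthology's actual code:

```python
import json

def all_pgs_clean(pg_dump_json: str) -> bool:
    """Return True when every PG in a `ceph pg dump --format=json`
    payload reports the active+clean state (the condition the
    'waiting for clean' loop polls for)."""
    pg_map = json.loads(pg_dump_json)["pg_map"]
    return all(st["state"] == "active+clean" for st in pg_map["pg_stats"])

def seq_caught_up(need: int, got: int) -> bool:
    """Mirror of the 'need seq N got M for osd.X' check: the loop
    retries until the OSD's reported stat seq reaches the target."""
    return got >= need

# hypothetical minimal payload mirroring the structure of the dump above
sample = json.dumps({"pg_ready": True,
                     "pg_map": {"pg_stats": [{"pgid": "1.0",
                                              "state": "active+clean"}]}})
print(all_pgs_clean(sample))            # True
print(seq_caught_up(68719476738, 68719476738))  # True
```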
2026-03-10T06:42:56.993 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-10T06:42:56.993 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T06:42:56.993 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph health --format=json 2026-03-10T06:42:57.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:56 vm09 ceph-mon[55409]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:42:57.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:56 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/541893314' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T06:42:57.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:56 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/1050879321' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T06:42:57.142 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:57.143 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:56 vm08 ceph-mon[52886]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:42:57.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:56 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/541893314' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T06:42:57.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:56 vm08 ceph-mon[52886]: from='client.? 
192.168.123.101:0/1050879321' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T06:42:57.365 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:42:57.365 INFO:teuthology.orchestra.run.vm01.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T06:42:57.420 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T06:42:57.420 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T06:42:57.420 INFO:teuthology.run_tasks:Running task cephadm.apply... 2026-03-10T06:42:57.422 INFO:tasks.cephadm:Applying spec(s): placement: count: 3 service_id: foo service_type: mon spec: crush_locations: host.a: - datacenter=a host.b: - datacenter=b - rack=2 host.c: - datacenter=a - rack=3 2026-03-10T06:42:57.422 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph orch apply -i - 2026-03-10T06:42:57.572 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:57.788 INFO:teuthology.orchestra.run.vm01.stdout:Scheduled mon update... 2026-03-10T06:42:57.846 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-10T06:42:57.848 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm01.local 2026-03-10T06:42:57.848 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- bash -c 'set -ex 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> # since we don'"'"'t know the real hostnames before the test, the next 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> # bit is in order to replace the fake hostnames "host.a/b/c" with 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> # the actual names cephadm knows the host by within the mon spec 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> ceph orch host ls --format json | jq -r '"'"'.[] | .hostname'"'"' > realnames 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> echo $'"'"'host.a\nhost.b\nhost.c'"'"' > fakenames 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> echo $'"'"'a\nb\nc'"'"' > mon_ids 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> echo $'"'"'{datacenter=a}\n{datacenter=b,rack=2}\n{datacenter=a,rack=3}'"'"' > crush_locs 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> ceph orch ls --service-name mon --export > mon.yaml 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> MONSPEC=`cat mon.yaml` 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> echo "$MONSPEC" 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> while read realname <&3 && read fakename <&4; do 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> MONSPEC="${MONSPEC//$fakename/$realname}" 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> done 3<realnames 4<fakenames 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> echo "$MONSPEC" > mon.yaml 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> cat mon.yaml 2026-03-10T06:42:57.849
DEBUG:teuthology.orchestra.run.vm01:> # now the spec should have the real hostnames, so let'"'"'s re-apply 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> ceph orch apply -i mon.yaml 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> sleep 90 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> ceph orch ps --refresh 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> ceph orch ls --service-name mon --export > mon.yaml; ceph orch apply -i mon.yaml 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> sleep 90 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> ceph mon dump 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> ceph mon dump --format json 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> # verify all the crush locations got set from "ceph mon dump" output 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> while read monid <&3 && read crushloc <&4; do 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> ceph mon dump --format json | jq --arg monid "$monid" --arg crushloc "$crushloc" -e '"'"'.mons | .[] | select(.name == $monid) | .crush_location == $crushloc'"'"' 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> done 3<mon_ids 4<crush_locs 2026-03-10T06:42:57.849 DEBUG:teuthology.orchestra.run.vm01:> ' 2026-03-10T06:42:57.876 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:57 vm01 ceph-mon[50525]: from='client.24260 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T06:42:57.876 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:42:57 vm01 ceph-mon[50525]: from='client.? 
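The nested quoting makes the script hard to follow; the core step is the bash loop `MONSPEC="${MONSPEC//$fakename/$realname}"` that swaps the placeholder hostnames in the exported mon spec for the names cephadm reported. A Python sketch of that substitution with sample data (the spec snippet and hostnames mirror this run's values, not live cluster output):

```python
# Mon spec with placeholder hostnames, as exported before substitution.
monspec = """service_type: mon
placement:
  count: 3
spec:
  crush_locations:
    host.a:
    - datacenter=a
    host.b:
    - datacenter=b
    - rack=2
    host.c:
    - datacenter=a
    - rack=3
"""

fakenames = ["host.a", "host.b", "host.c"]
realnames = ["vm01", "vm08", "vm09"]  # order must line up pairwise

# Equivalent of the bash loop body: MONSPEC="${MONSPEC//$fakename/$realname}"
for fakename, realname in zip(fakenames, realnames):
    monspec = monspec.replace(fakename, realname)

print(monspec)  # now keyed by vm01/vm08/vm09, ready for "ceph orch apply"
```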
192.168.123.101:0/3723955840' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T06:42:58.041 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config 2026-03-10T06:42:58.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:57 vm09 ceph-mon[55409]: from='client.24260 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T06:42:58.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:42:57 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/3723955840' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T06:42:58.112 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph orch host ls --format json 2026-03-10T06:42:58.112 INFO:teuthology.orchestra.run.vm01.stderr:+ jq -r '.[] | .hostname' 2026-03-10T06:42:58.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:57 vm08 ceph-mon[52886]: from='client.24260 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T06:42:58.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:42:57 vm08 ceph-mon[52886]: from='client.? 
192.168.123.101:0/3723955840' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T06:42:58.262 INFO:teuthology.orchestra.run.vm01.stderr:+ echo 'host.a 2026-03-10T06:42:58.262 INFO:teuthology.orchestra.run.vm01.stderr:host.b 2026-03-10T06:42:58.262 INFO:teuthology.orchestra.run.vm01.stderr:host.c' 2026-03-10T06:42:58.262 INFO:teuthology.orchestra.run.vm01.stderr:+ echo 'a 2026-03-10T06:42:58.262 INFO:teuthology.orchestra.run.vm01.stderr:b 2026-03-10T06:42:58.262 INFO:teuthology.orchestra.run.vm01.stderr:c' 2026-03-10T06:42:58.262 INFO:teuthology.orchestra.run.vm01.stderr:+ echo '{datacenter=a} 2026-03-10T06:42:58.262 INFO:teuthology.orchestra.run.vm01.stderr:{datacenter=b,rack=2} 2026-03-10T06:42:58.262 INFO:teuthology.orchestra.run.vm01.stderr:{datacenter=a,rack=3}' 2026-03-10T06:42:58.262 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph orch ls --service-name mon --export 2026-03-10T06:42:58.413 INFO:teuthology.orchestra.run.vm01.stderr:++ cat mon.yaml 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stderr:+ MONSPEC='service_type: mon 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stderr:service_name: mon 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stderr:placement: 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stderr: count: 3 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stderr:spec: 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stderr: crush_locations: 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stderr: host.a: 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=a 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stderr: host.b: 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=b 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stderr: - rack=2 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stderr: host.c: 2026-03-10T06:42:58.414 
INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=a 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stderr: - rack=3' 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stdout:service_type: mon 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stdout:service_name: mon 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stdout:placement: 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stdout: count: 3 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stdout:spec: 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stdout: crush_locations: 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stdout: host.a: 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stdout: - datacenter=a 2026-03-10T06:42:58.414 INFO:teuthology.orchestra.run.vm01.stdout: host.b: 2026-03-10T06:42:58.415 INFO:teuthology.orchestra.run.vm01.stdout: - datacenter=b 2026-03-10T06:42:58.415 INFO:teuthology.orchestra.run.vm01.stdout: - rack=2 2026-03-10T06:42:58.415 INFO:teuthology.orchestra.run.vm01.stdout: host.c: 2026-03-10T06:42:58.415 INFO:teuthology.orchestra.run.vm01.stdout: - datacenter=a 2026-03-10T06:42:58.415 INFO:teuthology.orchestra.run.vm01.stdout: - rack=3 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr:+ echo 'service_type: mon 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr:service_name: mon 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr:placement: 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: count: 3 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr:spec: 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: crush_locations: 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: host.a: 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=a 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: host.b: 2026-03-10T06:42:58.416 
INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=b 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: - rack=2 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: host.c: 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=a 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: - rack=3' 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr:+ read realname 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr:+ read fakename 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr:+ MONSPEC='service_type: mon 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr:service_name: mon 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr:placement: 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: count: 3 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr:spec: 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: crush_locations: 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: vm01: 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=a 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: host.b: 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=b 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: - rack=2 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: host.c: 2026-03-10T06:42:58.416 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=a 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - rack=3' 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:+ read realname 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:+ read fakename 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:+ MONSPEC='service_type: mon 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:service_name: mon 
2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:placement: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: count: 3 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:spec: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: crush_locations: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: vm01: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=a 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: vm08: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=b 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - rack=2 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: host.c: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=a 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - rack=3' 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:+ read realname 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:+ read fakename 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:+ MONSPEC='service_type: mon 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:service_name: mon 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:placement: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: count: 3 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:spec: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: crush_locations: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: vm01: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=a 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: vm08: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=b 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - rack=2 2026-03-10T06:42:58.417 
INFO:teuthology.orchestra.run.vm01.stderr: vm09: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=a 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - rack=3' 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:+ read realname 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:+ echo 'service_type: mon 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:service_name: mon 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:placement: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: count: 3 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:spec: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: crush_locations: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: vm01: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=a 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: vm08: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=b 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - rack=2 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: vm09: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - datacenter=a 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr: - rack=3' 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stderr:+ cat mon.yaml 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stdout:service_type: mon 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stdout:service_name: mon 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stdout:placement: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stdout: count: 3 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stdout:spec: 2026-03-10T06:42:58.417 INFO:teuthology.orchestra.run.vm01.stdout: crush_locations: 2026-03-10T06:42:58.417 
INFO:teuthology.orchestra.run.vm01.stdout: vm01: 2026-03-10T06:42:58.418 INFO:teuthology.orchestra.run.vm01.stdout: - datacenter=a 2026-03-10T06:42:58.418 INFO:teuthology.orchestra.run.vm01.stdout: vm08: 2026-03-10T06:42:58.418 INFO:teuthology.orchestra.run.vm01.stdout: - datacenter=b 2026-03-10T06:42:58.418 INFO:teuthology.orchestra.run.vm01.stdout: - rack=2 2026-03-10T06:42:58.418 INFO:teuthology.orchestra.run.vm01.stdout: vm09: 2026-03-10T06:42:58.418 INFO:teuthology.orchestra.run.vm01.stdout: - datacenter=a 2026-03-10T06:42:58.418 INFO:teuthology.orchestra.run.vm01.stdout: - rack=3 2026-03-10T06:42:58.418 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph orch apply -i mon.yaml 2026-03-10T06:42:58.566 INFO:teuthology.orchestra.run.vm01.stdout:Scheduled mon update... 2026-03-10T06:42:58.581 INFO:teuthology.orchestra.run.vm01.stderr:+ sleep 90 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: purged_snaps scrub ok 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: mon.b calling monitor election 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: mon.b calling monitor election 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: Setting crush location for mon c to {datacenter=a,rack=3} 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd='[{"prefix": "mon set_location", "name": "c", "args": ["datacenter=a", "rack=3"]}]': finished 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: mon.c calling monitor election 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: mon.a calling monitor election 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 
06:43:13 vm01 ceph-mon[50525]: mon.b calling monitor election 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: Health detail: HEALTH_WARN 1/3 mons down, quorum a,c 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: [WRN] MON_DOWN: 1/3 mons down, quorum a,c 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: mon.b (rank 2) addr [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] is down (out of quorum) 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: mon.a calling monitor election 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: mon.c calling monitor election 
2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: monmap epoch 6 2026-03-10T06:43:14.097 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: fsid 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:43:14.098 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: last_changed 2026-03-10T06:43:08.604378+0000 2026-03-10T06:43:14.098 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: created 2026-03-10T06:41:30.998723+0000 2026-03-10T06:43:14.098 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: min_mon_release 19 (squid) 2026-03-10T06:43:14.098 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: election_strategy: 1 2026-03-10T06:43:14.098 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a; crush_location {datacenter=a} 2026-03-10T06:43:14.098 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.c; crush_location {datacenter=a,rack=3} 2026-03-10T06:43:14.098 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: 2: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.b; crush_location {datacenter=b,rack=2} 2026-03-10T06:43:14.098 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: fsmap 2026-03-10T06:43:14.098 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: osdmap e19: 3 total, 3 up, 3 in 2026-03-10T06:43:14.098 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: mgrmap e15: a(active, since 81s), standbys: b 2026-03-10T06:43:14.098 
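The test's final loop checks each mon's `crush_location` with a jq predicate (`.mons | .[] | select(.name == $monid) | .crush_location == $crushloc`). The same check, sketched in Python over a payload shaped like the `ceph mon dump --format json` output the monmap above implies (trimmed to the relevant fields; the real payload also carries addresses, epoch, and fsid):

```python
import json

# Trimmed mon dump; crush_location strings match the monmap in this log.
mon_dump = json.loads("""
{"mons": [
  {"name": "a", "crush_location": "{datacenter=a}"},
  {"name": "b", "crush_location": "{datacenter=b,rack=2}"},
  {"name": "c", "crush_location": "{datacenter=a,rack=3}"}
]}
""")

# Expected locations, pairing mon_ids with crush_locs as the script does.
expected = {
    "a": "{datacenter=a}",
    "b": "{datacenter=b,rack=2}",
    "c": "{datacenter=a,rack=3}",
}

# Same predicate the jq -e expression evaluates, one mon at a time.
for mon in mon_dump["mons"]:
    assert mon["crush_location"] == expected[mon["name"]], mon["name"]
print("all crush locations verified")
```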
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,c) 2026-03-10T06:43:14.098 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: Cluster is now healthy 2026-03-10T06:43:14.098 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: overall HEALTH_OK 2026-03-10T06:43:14.098 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:13 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: purged_snaps scrub ok 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: mon.b calling monitor election 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: mon.b calling monitor election 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: Setting crush location for mon c to {datacenter=a,rack=3} 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd='[{"prefix": "mon set_location", "name": "c", "args": ["datacenter=a", "rack=3"]}]': finished 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: mon.c calling monitor election 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: mon.a calling monitor election 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: mon.b calling monitor election 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:43:14.107 
INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: Health detail: HEALTH_WARN 1/3 mons down, quorum a,c 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: [WRN] MON_DOWN: 1/3 mons down, quorum a,c 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: mon.b (rank 2) addr [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] is down (out of quorum) 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: mon.a calling monitor election 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: mon.c calling monitor election 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: monmap epoch 6 2026-03-10T06:43:14.107 
INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: fsid 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: last_changed 2026-03-10T06:43:08.604378+0000 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: created 2026-03-10T06:41:30.998723+0000 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: min_mon_release 19 (squid) 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: election_strategy: 1 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a; crush_location {datacenter=a} 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.c; crush_location {datacenter=a,rack=3} 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: 2: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.b; crush_location {datacenter=b,rack=2} 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: fsmap 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: osdmap e19: 3 total, 3 up, 3 in 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: mgrmap e15: a(active, since 81s), standbys: b 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,c) 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: Cluster is now healthy 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 
10 06:43:13 vm09 ceph-mon[55409]: overall HEALTH_OK 2026-03-10T06:43:14.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:13 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: purged_snaps scrub ok 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: mon.b calling monitor election 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: mon.b calling monitor election 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: Setting crush location for mon c to {datacenter=a,rack=3} 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd='[{"prefix": "mon set_location", "name": "c", "args": ["datacenter=a", "rack=3"]}]': finished 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: mon.c calling monitor election 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: mon.a calling monitor election 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: mon.b calling monitor election 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' 
entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: Health detail: HEALTH_WARN 1/3 mons down, quorum a,c 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: [WRN] MON_DOWN: 1/3 mons down, quorum a,c 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: mon.b (rank 2) addr [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] is down (out of quorum) 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: mon.a calling monitor election 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: mon.c calling monitor election 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: monmap epoch 6 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: fsid 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: last_changed 2026-03-10T06:43:08.604378+0000 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 
ceph-mon[52886]: created 2026-03-10T06:41:30.998723+0000 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: min_mon_release 19 (squid) 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: election_strategy: 1 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a; crush_location {datacenter=a} 2026-03-10T06:43:14.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.c; crush_location {datacenter=a,rack=3} 2026-03-10T06:43:14.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: 2: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.b; crush_location {datacenter=b,rack=2} 2026-03-10T06:43:14.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: fsmap 2026-03-10T06:43:14.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: osdmap e19: 3 total, 3 up, 3 in 2026-03-10T06:43:14.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: mgrmap e15: a(active, since 81s), standbys: b 2026-03-10T06:43:14.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,c) 2026-03-10T06:43:14.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: Cluster is now healthy 2026-03-10T06:43:14.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: overall HEALTH_OK 2026-03-10T06:43:14.145 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:13 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: Reconfiguring mon.a (monmap 
changed)... 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: Reconfiguring daemon mon.a on vm01 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' 
entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T06:43:14.875 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:14 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: Reconfiguring mon.a (monmap changed)... 
2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: Reconfiguring daemon mon.a on vm01 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T06:43:15.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:14 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: Reconfiguring mon.a (monmap changed)... 
2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: Reconfiguring daemon mon.a on vm01 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T06:43:15.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:14 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: Reconfiguring mgr.a (monmap changed)... 
2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: Reconfiguring daemon mgr.a on vm01 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: Reconfiguring osd.0 (monmap changed)... 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: Reconfiguring daemon osd.0 on vm01 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: Reconfiguring mon.b (monmap changed)... 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: Reconfiguring daemon mon.b on vm08 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: Reconfiguring mgr.b (monmap changed)... 
2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: Reconfiguring daemon mgr.b on vm08 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.894 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T06:43:15.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: Reconfiguring mgr.a (monmap changed)... 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: Reconfiguring daemon mgr.a on vm01 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: Reconfiguring osd.0 (monmap changed)... 
2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: Reconfiguring daemon osd.0 on vm01 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: Reconfiguring mon.b (monmap changed)... 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: Reconfiguring daemon mon.b on vm08 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: Reconfiguring mgr.b (monmap changed)... 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: Reconfiguring daemon mgr.b on vm08 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 
ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' 
entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T06:43:16.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: Reconfiguring mgr.a (monmap changed)... 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: Reconfiguring daemon mgr.a on vm01 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: Reconfiguring osd.0 (monmap changed)... 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: Reconfiguring daemon osd.0 on vm01 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: Reconfiguring mon.b (monmap changed)... 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: Reconfiguring daemon mon.b on vm08 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: Reconfiguring mgr.b (monmap changed)... 
2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: Reconfiguring daemon mgr.b on vm08 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.111 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T06:43:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:43:17.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:16 vm09 ceph-mon[55409]: Reconfiguring osd.1 (monmap changed)... 2026-03-10T06:43:17.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:16 vm09 ceph-mon[55409]: Reconfiguring daemon osd.1 on vm08 2026-03-10T06:43:17.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:16 vm09 ceph-mon[55409]: Reconfiguring mon.c (monmap changed)... 
2026-03-10T06:43:17.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:16 vm09 ceph-mon[55409]: Reconfiguring daemon mon.c on vm09 2026-03-10T06:43:17.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:16 vm09 ceph-mon[55409]: Reconfiguring osd.2 (monmap changed)... 2026-03-10T06:43:17.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:16 vm09 ceph-mon[55409]: Reconfiguring daemon osd.2 on vm09 2026-03-10T06:43:17.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:17.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:17.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:16 vm09 ceph-mon[55409]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:17.110 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:16 vm01 ceph-mon[50525]: Reconfiguring osd.1 (monmap changed)... 2026-03-10T06:43:17.110 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:16 vm01 ceph-mon[50525]: Reconfiguring daemon osd.1 on vm08 2026-03-10T06:43:17.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:16 vm01 ceph-mon[50525]: Reconfiguring mon.c (monmap changed)... 2026-03-10T06:43:17.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:16 vm01 ceph-mon[50525]: Reconfiguring daemon mon.c on vm09 2026-03-10T06:43:17.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:16 vm01 ceph-mon[50525]: Reconfiguring osd.2 (monmap changed)... 
2026-03-10T06:43:17.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:16 vm01 ceph-mon[50525]: Reconfiguring daemon osd.2 on vm09 2026-03-10T06:43:17.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:17.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:17.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:16 vm01 ceph-mon[50525]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:16 vm08 ceph-mon[52886]: Reconfiguring osd.1 (monmap changed)... 2026-03-10T06:43:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:16 vm08 ceph-mon[52886]: Reconfiguring daemon osd.1 on vm08 2026-03-10T06:43:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:16 vm08 ceph-mon[52886]: Reconfiguring mon.c (monmap changed)... 2026-03-10T06:43:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:16 vm08 ceph-mon[52886]: Reconfiguring daemon mon.c on vm09 2026-03-10T06:43:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:16 vm08 ceph-mon[52886]: Reconfiguring osd.2 (monmap changed)... 
2026-03-10T06:43:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:16 vm08 ceph-mon[52886]: Reconfiguring daemon osd.2 on vm09 2026-03-10T06:43:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:43:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:16 vm08 ceph-mon[52886]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:18.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:17 vm09 ceph-mon[55409]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:18.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:17 vm01 ceph-mon[50525]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:18.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:17 vm08 ceph-mon[52886]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:20.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:19 vm09 ceph-mon[55409]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:20.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:19 vm01 ceph-mon[50525]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:20.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:19 vm08 ceph-mon[52886]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:22.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:21 vm09 ceph-mon[55409]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:22.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 
10 06:43:21 vm01 ceph-mon[50525]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:22.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:21 vm08 ceph-mon[52886]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:24.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:23 vm09 ceph-mon[55409]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:24.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:23 vm01 ceph-mon[50525]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:24.393 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:23 vm08 ceph-mon[52886]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:26.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:25 vm09 ceph-mon[55409]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:26.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:25 vm01 ceph-mon[50525]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:26.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:25 vm08 ceph-mon[52886]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:28.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:27 vm09 ceph-mon[55409]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:28.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:27 vm01 ceph-mon[50525]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:28.393 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:27 vm08 ceph-mon[52886]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:30.356 
INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:29 vm09 ceph-mon[55409]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:30.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:29 vm01 ceph-mon[50525]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:29 vm08 ceph-mon[52886]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:32.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:31 vm09 ceph-mon[55409]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:32.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:31 vm01 ceph-mon[50525]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:32.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:31 vm08 ceph-mon[52886]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:34.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:33 vm09 ceph-mon[55409]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:34.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:33 vm01 ceph-mon[50525]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:34.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:33 vm08 ceph-mon[52886]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:36.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:35 vm09 ceph-mon[55409]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:36.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:35 vm01 ceph-mon[50525]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T06:43:36.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:35 vm08 ceph-mon[52886]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:38.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:38 vm09 ceph-mon[55409]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:38.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:38 vm01 ceph-mon[50525]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:38.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:38 vm08 ceph-mon[52886]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:40.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:39 vm09 ceph-mon[55409]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:40.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:39 vm01 ceph-mon[50525]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:40.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:39 vm08 ceph-mon[52886]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:42.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:41 vm09 ceph-mon[55409]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:42.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:41 vm01 ceph-mon[50525]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:42.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:41 vm08 ceph-mon[52886]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:44.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:43 vm09 ceph-mon[55409]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB 
used, 60 GiB / 60 GiB avail 2026-03-10T06:43:44.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:43 vm01 ceph-mon[50525]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:44.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:43 vm08 ceph-mon[52886]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:46.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:45 vm09 ceph-mon[55409]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:46.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:45 vm01 ceph-mon[50525]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:46.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:45 vm08 ceph-mon[52886]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:48.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:47 vm09 ceph-mon[55409]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:48.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:47 vm01 ceph-mon[50525]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:48.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:47 vm08 ceph-mon[52886]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:50.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:49 vm09 ceph-mon[55409]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:50.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:49 vm01 ceph-mon[50525]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:50.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:49 vm08 ceph-mon[52886]: pgmap v68: 1 pgs: 1 
active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:52.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:51 vm09 ceph-mon[55409]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:52.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:51 vm01 ceph-mon[50525]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:52.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:51 vm08 ceph-mon[52886]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:54.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:53 vm09 ceph-mon[55409]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:54.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:53 vm01 ceph-mon[50525]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:54.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:53 vm08 ceph-mon[52886]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:56.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:55 vm09 ceph-mon[55409]: pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:56.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:55 vm01 ceph-mon[50525]: pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:56.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:55 vm08 ceph-mon[52886]: pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:58.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:57 vm09 ceph-mon[55409]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:58.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:57 vm01 
ceph-mon[50525]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:43:58.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:57 vm08 ceph-mon[52886]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:00.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:43:59 vm09 ceph-mon[55409]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:00.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:43:59 vm01 ceph-mon[50525]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:00.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:43:59 vm08 ceph-mon[52886]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:02.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:01 vm09 ceph-mon[55409]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:02.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:01 vm01 ceph-mon[50525]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:02.393 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:01 vm08 ceph-mon[52886]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:04.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:03 vm09 ceph-mon[55409]: pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:04.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:03 vm01 ceph-mon[50525]: pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:04.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:03 vm08 ceph-mon[52886]: pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:06.356 
INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:05 vm09 ceph-mon[55409]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:06.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:05 vm01 ceph-mon[50525]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:06.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:05 vm08 ceph-mon[52886]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:08.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:07 vm09 ceph-mon[55409]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:08.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:07 vm01 ceph-mon[50525]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:08.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:07 vm08 ceph-mon[52886]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:10.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:09 vm09 ceph-mon[55409]: pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:10.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:09 vm01 ceph-mon[50525]: pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:10.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:09 vm08 ceph-mon[52886]: pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:12.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:11 vm09 ceph-mon[55409]: pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:12.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:11 vm01 ceph-mon[50525]: pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T06:44:12.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:11 vm08 ceph-mon[52886]: pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:14.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:13 vm09 ceph-mon[55409]: pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:14.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:13 vm01 ceph-mon[50525]: pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:14.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:13 vm08 ceph-mon[52886]: pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:16.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:15 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:44:16.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:15 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:44:16.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:15 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:44:17.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:16 vm09 ceph-mon[55409]: pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:17.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:44:17.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", 
"entity": "client.admin"}]: dispatch 2026-03-10T06:44:17.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:16 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:17.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:16 vm01 ceph-mon[50525]: pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:17.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:44:17.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:44:17.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:16 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:16 vm08 ceph-mon[52886]: pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:44:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:44:17.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:16 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:18.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:17 vm09 ceph-mon[55409]: pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:18.360 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:17 vm01 ceph-mon[50525]: pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:18.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:17 vm08 ceph-mon[52886]: pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:20.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:19 vm09 ceph-mon[55409]: pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:20.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:19 vm01 ceph-mon[50525]: pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:20.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:19 vm08 ceph-mon[52886]: pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:22.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:21 vm09 ceph-mon[55409]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:22.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:21 vm01 ceph-mon[50525]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:22.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:21 vm08 ceph-mon[52886]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:24.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:23 vm09 ceph-mon[55409]: pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:24.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:23 vm01 ceph-mon[50525]: pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:24.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:23 vm08 ceph-mon[52886]: pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T06:44:26.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:25 vm09 ceph-mon[55409]: pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:26.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:25 vm01 ceph-mon[50525]: pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:26.393 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:25 vm08 ceph-mon[52886]: pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:28.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:27 vm09 ceph-mon[55409]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:28.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:27 vm01 ceph-mon[50525]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:28.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:27 vm08 ceph-mon[52886]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:28.583 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph orch ps --refresh 2026-03-10T06:44:28.737 INFO:teuthology.orchestra.run.vm01.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T06:44:28.737 INFO:teuthology.orchestra.run.vm01.stdout:mgr.a vm01 *:9283,8765 running (2m) 2m ago 2m 537M - 19.2.3-678-ge911bdeb 654f31e6858e b20959c38dd5 2026-03-10T06:44:28.737 INFO:teuthology.orchestra.run.vm01.stdout:mgr.b vm08 *:8443,8765 running (2m) 111s ago 2m 485M - 19.2.3-678-ge911bdeb 654f31e6858e e75496133b4d 2026-03-10T06:44:28.737 INFO:teuthology.orchestra.run.vm01.stdout:mon.a vm01 running (2m) 2m ago 2m 41.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c842826e15e7 2026-03-10T06:44:28.737 INFO:teuthology.orchestra.run.vm01.stdout:mon.b vm08 running (2m) 111s ago 2m 36.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 18d522ffc1a6 
2026-03-10T06:44:28.737 INFO:teuthology.orchestra.run.vm01.stdout:mon.c vm09 running (2m) 102s ago 2m 38.1M 2048M 19.2.3-678-ge911bdeb 654f31e6858e cfa4ec60919b 2026-03-10T06:44:28.737 INFO:teuthology.orchestra.run.vm01.stdout:osd.0 vm01 running (2m) 2m ago 2m 12.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e a0d372a27f59 2026-03-10T06:44:28.737 INFO:teuthology.orchestra.run.vm01.stdout:osd.1 vm08 running (112s) 111s ago 112s 15.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 8e9a1c57b174 2026-03-10T06:44:28.737 INFO:teuthology.orchestra.run.vm01.stdout:osd.2 vm09 running (103s) 102s ago 103s 13.2M 4353M 19.2.3-678-ge911bdeb 654f31e6858e 30cac70de41c 2026-03-10T06:44:28.762 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph orch ls --service-name mon --export 2026-03-10T06:44:28.940 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph orch apply -i mon.yaml 2026-03-10T06:44:29.132 INFO:teuthology.orchestra.run.vm01.stdout:Scheduled mon update... 2026-03-10T06:44:29.152 INFO:teuthology.orchestra.run.vm01.stderr:+ sleep 90 2026-03-10T06:44:29.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:28 vm09 ceph-mon[55409]: from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch ps", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:44:29.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:28 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:44:29.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:28 vm01 ceph-mon[50525]: from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch ps", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:44:29.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:28 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:44:29.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:28 
vm08 ceph-mon[52886]: from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch ps", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:44:29.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:28 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='client.14442 -' entity='client.admin' cmd=[{"prefix": "orch ls", "service_name": "mon", "export": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='client.14448 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: Saving service mon spec with placement count:3 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.394 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:44:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.394 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:30 vm08 ceph-mon[52886]: pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:30.606 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='client.14442 -' entity='client.admin' cmd=[{"prefix": "orch ls", "service_name": "mon", "export": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:44:30.606 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='client.14448 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: Saving service mon spec with placement count:3 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 
ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.607 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:30 vm09 ceph-mon[55409]: pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 
ceph-mon[50525]: from='client.14442 -' entity='client.admin' cmd=[{"prefix": "orch ls", "service_name": "mon", "export": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='client.14448 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: Saving service mon spec with placement count:3 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 
192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:44:30.611 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:30 vm01 ceph-mon[50525]: pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:32.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:31 vm09 ceph-mon[55409]: pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:32.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:31 vm01 ceph-mon[50525]: pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB 
used, 60 GiB / 60 GiB avail 2026-03-10T06:44:32.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:31 vm08 ceph-mon[52886]: pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:34.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:33 vm09 ceph-mon[55409]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:34.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:33 vm01 ceph-mon[50525]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:34.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:33 vm08 ceph-mon[52886]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:36.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:35 vm09 ceph-mon[55409]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:36.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:35 vm01 ceph-mon[50525]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:36.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:35 vm08 ceph-mon[52886]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:38.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:37 vm09 ceph-mon[55409]: pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:38.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:37 vm01 ceph-mon[50525]: pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:38.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:37 vm08 ceph-mon[52886]: pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:40.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:39 vm09 ceph-mon[55409]: pgmap v93: 1 pgs: 1 
active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:40.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:39 vm01 ceph-mon[50525]: pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:40.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:39 vm08 ceph-mon[52886]: pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:42.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:41 vm09 ceph-mon[55409]: pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:42.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:41 vm01 ceph-mon[50525]: pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:42.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:41 vm08 ceph-mon[52886]: pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:44.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:43 vm09 ceph-mon[55409]: pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:44.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:43 vm01 ceph-mon[50525]: pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:44.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:43 vm08 ceph-mon[52886]: pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:46.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:45 vm09 ceph-mon[55409]: pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:46.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:45 vm01 ceph-mon[50525]: pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:46.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:45 vm08 
ceph-mon[52886]: pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:48.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:47 vm09 ceph-mon[55409]: pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:48.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:47 vm01 ceph-mon[50525]: pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:48.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:47 vm08 ceph-mon[52886]: pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:50.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:49 vm09 ceph-mon[55409]: pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:50.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:49 vm01 ceph-mon[50525]: pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:50.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:49 vm08 ceph-mon[52886]: pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:52.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:51 vm09 ceph-mon[55409]: pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:52.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:51 vm01 ceph-mon[50525]: pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:52.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:51 vm08 ceph-mon[52886]: pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:54.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:53 vm09 ceph-mon[55409]: pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:54.361 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:53 vm01 ceph-mon[50525]: pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:54.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:53 vm08 ceph-mon[52886]: pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:56.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:55 vm09 ceph-mon[55409]: pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:56.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:55 vm01 ceph-mon[50525]: pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:56.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:55 vm08 ceph-mon[52886]: pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:58.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:57 vm09 ceph-mon[55409]: pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:58.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:57 vm01 ceph-mon[50525]: pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:44:58.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:57 vm08 ceph-mon[52886]: pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:00.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:44:59 vm09 ceph-mon[55409]: pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:00.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:44:59 vm01 ceph-mon[50525]: pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:00.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:44:59 vm08 ceph-mon[52886]: pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 
60 GiB avail 2026-03-10T06:45:02.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:01 vm09 ceph-mon[55409]: pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:02.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:01 vm01 ceph-mon[50525]: pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:02.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:01 vm08 ceph-mon[52886]: pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:04.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:03 vm09 ceph-mon[55409]: pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:04.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:03 vm01 ceph-mon[50525]: pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:04.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:03 vm08 ceph-mon[52886]: pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:06.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:05 vm09 ceph-mon[55409]: pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:06.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:05 vm01 ceph-mon[50525]: pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:06.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:05 vm08 ceph-mon[52886]: pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:08.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:07 vm09 ceph-mon[55409]: pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:08.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:07 vm01 ceph-mon[50525]: pgmap v107: 1 pgs: 1 active+clean; 
449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:08.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:07 vm08 ceph-mon[52886]: pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:11.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:10 vm09 ceph-mon[55409]: pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:11.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:10 vm01 ceph-mon[50525]: pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:11.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:10 vm08 ceph-mon[52886]: pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:13.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:12 vm09 ceph-mon[55409]: pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:13.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:12 vm01 ceph-mon[50525]: pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:13.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:12 vm08 ceph-mon[52886]: pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:15.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:14 vm09 ceph-mon[55409]: pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:15.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:14 vm01 ceph-mon[50525]: pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:15.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:14 vm08 ceph-mon[52886]: pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:17.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:16 vm09 ceph-mon[55409]: 
pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:17.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:16 vm01 ceph-mon[50525]: pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:17.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:16 vm08 ceph-mon[52886]: pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:19.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:18 vm09 ceph-mon[55409]: pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:19.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:18 vm01 ceph-mon[50525]: pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:19.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:18 vm08 ceph-mon[52886]: pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:21.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:21 vm09 ceph-mon[55409]: pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:21.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:21 vm01 ceph-mon[50525]: pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:21.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:21 vm08 ceph-mon[52886]: pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:23.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:23 vm09 ceph-mon[55409]: pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:23.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:23 vm01 ceph-mon[50525]: pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:23.394 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:23 vm08 ceph-mon[52886]: pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:25.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:25 vm09 ceph-mon[55409]: pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:25.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:25 vm01 ceph-mon[50525]: pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:25.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:25 vm08 ceph-mon[52886]: pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:27.356 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:27 vm09 ceph-mon[55409]: pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:27.360 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:27 vm01 ceph-mon[50525]: pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:27.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:27 vm08 ceph-mon[52886]: pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:29.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:29 vm09 ceph-mon[55409]: pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:29.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:29 vm01 ceph-mon[50525]: pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:29.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:29 vm08 ceph-mon[52886]: pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:30.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:45:30.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:45:30.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:45:30.357 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:30 vm09 ceph-mon[55409]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:45:30.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:45:30.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:45:30.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:45:30.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:30 vm01 ceph-mon[50525]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:45:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T06:45:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T06:45:30.394 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T06:45:30.394 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:30 vm08 ceph-mon[52886]: from='mgr.14150 192.168.123.101:0/1532114153' entity='mgr.a' 2026-03-10T06:45:31.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:31 vm08 ceph-mon[52886]: pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:32.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:31 vm09 ceph-mon[55409]: pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:32.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:31 vm01 ceph-mon[50525]: pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:33.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:33 vm08 ceph-mon[52886]: pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:34.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:33 vm09 ceph-mon[55409]: pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:34.110 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:33 vm01 ceph-mon[50525]: pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:35.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:35 vm08 ceph-mon[52886]: pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:36.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:35 vm09 ceph-mon[55409]: pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:36.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:35 vm01 ceph-mon[50525]: pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 81 
MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:37.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:37 vm08 ceph-mon[52886]: pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:38.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:37 vm09 ceph-mon[55409]: pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:38.110 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:37 vm01 ceph-mon[50525]: pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:39.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:39 vm08 ceph-mon[52886]: pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:40.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:39 vm09 ceph-mon[55409]: pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:40.110 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:39 vm01 ceph-mon[50525]: pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:41.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:41 vm08 ceph-mon[52886]: pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:42.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:41 vm09 ceph-mon[55409]: pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:42.110 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:41 vm01 ceph-mon[50525]: pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:43.894 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:43 vm08 ceph-mon[52886]: pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:44.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:43 vm09 ceph-mon[55409]: pgmap v124: 1 
pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:44.110 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:43 vm01 ceph-mon[50525]: pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:46.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:45 vm09 ceph-mon[55409]: pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:46.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:45 vm01 ceph-mon[50525]: pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:46.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:45 vm08 ceph-mon[52886]: pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:48.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:47 vm09 ceph-mon[55409]: pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:48.110 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:47 vm01 ceph-mon[50525]: pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:48.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:47 vm08 ceph-mon[52886]: pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:50.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:49 vm09 ceph-mon[55409]: pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:50.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:49 vm01 ceph-mon[50525]: pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:50.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:49 vm08 ceph-mon[52886]: pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:52.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:51 
vm09 ceph-mon[55409]: pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:52.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:51 vm01 ceph-mon[50525]: pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:52.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:51 vm08 ceph-mon[52886]: pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:54.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:53 vm09 ceph-mon[55409]: pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:54.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:53 vm01 ceph-mon[50525]: pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:54.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:53 vm08 ceph-mon[52886]: pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:56.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:55 vm09 ceph-mon[55409]: pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:56.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:55 vm01 ceph-mon[50525]: pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:56.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:55 vm08 ceph-mon[52886]: pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:58.107 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:57 vm09 ceph-mon[55409]: pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:58.111 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:57 vm01 ceph-mon[50525]: pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:58.144 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:57 vm08 ceph-mon[52886]: pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:45:59.153 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph mon dump 2026-03-10T06:45:59.335 INFO:teuthology.orchestra.run.vm01.stdout:epoch 6 2026-03-10T06:45:59.335 INFO:teuthology.orchestra.run.vm01.stdout:fsid 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:45:59.335 INFO:teuthology.orchestra.run.vm01.stdout:last_changed 2026-03-10T06:43:08.604378+0000 2026-03-10T06:45:59.335 INFO:teuthology.orchestra.run.vm01.stdout:created 2026-03-10T06:41:30.998723+0000 2026-03-10T06:45:59.336 INFO:teuthology.orchestra.run.vm01.stdout:min_mon_release 19 (squid) 2026-03-10T06:45:59.336 INFO:teuthology.orchestra.run.vm01.stdout:election_strategy: 1 2026-03-10T06:45:59.336 INFO:teuthology.orchestra.run.vm01.stdout:0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a; crush_location {datacenter=a} 2026-03-10T06:45:59.336 INFO:teuthology.orchestra.run.vm01.stdout:1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.c; crush_location {datacenter=a,rack=3} 2026-03-10T06:45:59.336 INFO:teuthology.orchestra.run.vm01.stdout:2: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.b; crush_location {datacenter=b,rack=2} 2026-03-10T06:45:59.336 INFO:teuthology.orchestra.run.vm01.stderr:dumped monmap epoch 6 2026-03-10T06:45:59.345 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph mon dump --format json 2026-03-10T06:45:59.533 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:45:59.533 
INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":6,"fsid":"0204d884-1c4c-11f1-a017-9957fb527c0e","modified":"2026-03-10T06:43:08.604378Z","created":"2026-03-10T06:41:30.998723Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{datacenter=a}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:3300","nonce":0},{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{datacenter=a,rack=3}"},{"rank":2,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:3300","nonce":0},{"type":"v1","addr":"192.168.123.108:6789","nonce":0}]},"addr":"192.168.123.108:6789/0","public_addr":"192.168.123.108:6789/0","priority":0,"weight":0,"crush_location":"{datacenter=b,rack=2}"}],"quorum":[0,1,2]}
2026-03-10T06:45:59.533 INFO:teuthology.orchestra.run.vm01.stderr:dumped monmap epoch 6
2026-03-10T06:45:59.543 INFO:teuthology.orchestra.run.vm01.stderr:+ read monid
2026-03-10T06:45:59.543 INFO:teuthology.orchestra.run.vm01.stderr:+ read crushloc
2026-03-10T06:45:59.543 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph mon dump --format json
2026-03-10T06:45:59.543 INFO:teuthology.orchestra.run.vm01.stderr:+ jq --arg monid a --arg crushloc '{datacenter=a}' -e '.mons | .[] | select(.name == $monid) | .crush_location == $crushloc'
2026-03-10T06:45:59.726 INFO:teuthology.orchestra.run.vm01.stderr:dumped monmap epoch 6
2026-03-10T06:45:59.737 INFO:teuthology.orchestra.run.vm01.stdout:true
2026-03-10T06:45:59.737 INFO:teuthology.orchestra.run.vm01.stderr:+ read monid
2026-03-10T06:45:59.737 INFO:teuthology.orchestra.run.vm01.stderr:+ read crushloc
2026-03-10T06:45:59.737 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph mon dump --format json
2026-03-10T06:45:59.737 INFO:teuthology.orchestra.run.vm01.stderr:+ jq --arg monid b --arg crushloc '{datacenter=b,rack=2}' -e '.mons | .[] | select(.name == $monid) | .crush_location == $crushloc'
2026-03-10T06:45:59.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:59 vm01 ceph-mon[50525]: pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T06:45:59.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:59 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/1807080322' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
2026-03-10T06:45:59.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:45:59 vm01 ceph-mon[50525]: from='client.? 192.168.123.101:0/1549522764' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T06:45:59.922 INFO:teuthology.orchestra.run.vm01.stderr:dumped monmap epoch 6
2026-03-10T06:45:59.932 INFO:teuthology.orchestra.run.vm01.stdout:true
2026-03-10T06:45:59.932 INFO:teuthology.orchestra.run.vm01.stderr:+ read monid
2026-03-10T06:45:59.932 INFO:teuthology.orchestra.run.vm01.stderr:+ read crushloc
2026-03-10T06:45:59.932 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph mon dump --format json
2026-03-10T06:45:59.933 INFO:teuthology.orchestra.run.vm01.stderr:+ jq --arg monid c --arg crushloc '{datacenter=a,rack=3}' -e '.mons | .[] | select(.name == $monid) | .crush_location == $crushloc'
2026-03-10T06:46:00.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:59 vm09 ceph-mon[55409]: pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T06:46:00.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:59 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/1807080322' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
2026-03-10T06:46:00.106 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:45:59 vm09 ceph-mon[55409]: from='client.? 192.168.123.101:0/1549522764' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T06:46:00.126 INFO:teuthology.orchestra.run.vm01.stderr:dumped monmap epoch 6
2026-03-10T06:46:00.136 INFO:teuthology.orchestra.run.vm01.stdout:true
2026-03-10T06:46:00.136 INFO:teuthology.orchestra.run.vm01.stderr:+ read monid
2026-03-10T06:46:00.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:59 vm08 ceph-mon[52886]: pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T06:46:00.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:59 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/1807080322' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
2026-03-10T06:46:00.144 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:45:59 vm08 ceph-mon[52886]: from='client.? 192.168.123.101:0/1549522764' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T06:46:00.174 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-10T06:46:00.176 INFO:tasks.cephadm:Teardown begin
2026-03-10T06:46:00.176 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T06:46:00.242 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T06:46:00.268 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T06:46:00.293 INFO:tasks.cephadm:Disabling cephadm mgr module
2026-03-10T06:46:00.293 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e -- ceph mgr module disable cephadm
2026-03-10T06:46:00.468 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/mon.a/config
2026-03-10T06:46:00.486 INFO:teuthology.orchestra.run.vm01.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory
2026-03-10T06:46:00.507 DEBUG:teuthology.orchestra.run:got remote process result: 125
2026-03-10T06:46:00.507 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-10T06:46:00.507 DEBUG:teuthology.orchestra.run.vm01:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T06:46:00.526 DEBUG:teuthology.orchestra.run.vm08:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T06:46:00.549 DEBUG:teuthology.orchestra.run.vm09:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T06:46:00.567 INFO:tasks.cephadm:Stopping all daemons...
2026-03-10T06:46:00.567 INFO:tasks.cephadm.mon.a:Stopping mon.a...
2026-03-10T06:46:00.567 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mon.a
2026-03-10T06:46:00.858 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:46:00 vm01 systemd[1]: Stopping Ceph mon.a for 0204d884-1c4c-11f1-a017-9957fb527c0e...
2026-03-10T06:46:00.858 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:46:00 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-a[50521]: 2026-03-10T06:46:00.672+0000 7f488ca37640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T06:46:00.858 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 06:46:00 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-a[50521]: 2026-03-10T06:46:00.672+0000 7f488ca37640 -1 mon.a@0(leader) e6 *** Got Signal Terminated ***
2026-03-10T06:46:00.940 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mon.a.service'
2026-03-10T06:46:00.981 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T06:46:00.981 INFO:tasks.cephadm.mon.a:Stopped mon.a
2026-03-10T06:46:00.981 INFO:tasks.cephadm.mon.c:Stopping mon.b...
2026-03-10T06:46:00.982 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mon.b
2026-03-10T06:46:01.288 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:46:01 vm08 systemd[1]: Stopping Ceph mon.b for 0204d884-1c4c-11f1-a017-9957fb527c0e...
2026-03-10T06:46:01.288 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:46:01 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-b[52882]: 2026-03-10T06:46:01.091+0000 7f08c2ad8640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T06:46:01.288 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 06:46:01 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-b[52882]: 2026-03-10T06:46:01.091+0000 7f08c2ad8640 -1 mon.b@2(peon) e6 *** Got Signal Terminated ***
2026-03-10T06:46:01.372 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mon.b.service'
2026-03-10T06:46:01.414 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T06:46:01.414 INFO:tasks.cephadm.mon.c:Stopped mon.b
2026-03-10T06:46:01.414 INFO:tasks.cephadm.mon.c:Stopping mon.c...
2026-03-10T06:46:01.414 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mon.c
2026-03-10T06:46:01.693 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:46:01 vm09 systemd[1]: Stopping Ceph mon.c for 0204d884-1c4c-11f1-a017-9957fb527c0e...
2026-03-10T06:46:01.693 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:46:01 vm09 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-c[55405]: 2026-03-10T06:46:01.528+0000 7f1eb9c98640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T06:46:01.693 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:46:01 vm09 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-c[55405]: 2026-03-10T06:46:01.528+0000 7f1eb9c98640 -1 mon.c@1(peon) e6 *** Got Signal Terminated ***
2026-03-10T06:46:01.693 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:46:01 vm09 podman[62523]: 2026-03-10 06:46:01.651764893 +0000 UTC m=+0.137722095 container died cfa4ec60919b3f035eafd18cd6463fa7a7fdc8825869b7c905ce513f2934d96b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-c, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
2026-03-10T06:46:01.693 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:46:01 vm09 podman[62523]: 2026-03-10 06:46:01.67248272 +0000 UTC m=+0.158439933 container remove cfa4ec60919b3f035eafd18cd6463fa7a7fdc8825869b7c905ce513f2934d96b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-10T06:46:01.693 INFO:journalctl@ceph.mon.c.vm09.stdout:Mar 10 06:46:01 vm09 bash[62523]: ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-mon-c
2026-03-10T06:46:01.756 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mon.c.service'
2026-03-10T06:46:01.793 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T06:46:01.793 INFO:tasks.cephadm.mon.c:Stopped mon.c
2026-03-10T06:46:01.793 INFO:tasks.cephadm.mgr.a:Stopping mgr.a...
2026-03-10T06:46:01.793 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mgr.a
2026-03-10T06:46:02.022 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mgr.a.service'
2026-03-10T06:46:02.053 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T06:46:02.053 INFO:tasks.cephadm.mgr.a:Stopped mgr.a
2026-03-10T06:46:02.053 INFO:tasks.cephadm.mgr.b:Stopping mgr.b...
2026-03-10T06:46:02.053 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mgr.b
2026-03-10T06:46:02.187 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 06:46:02 vm08 systemd[1]: Stopping Ceph mgr.b for 0204d884-1c4c-11f1-a017-9957fb527c0e...
2026-03-10T06:46:02.265 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@mgr.b.service'
2026-03-10T06:46:02.292 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T06:46:02.292 INFO:tasks.cephadm.mgr.b:Stopped mgr.b
2026-03-10T06:46:02.292 INFO:tasks.cephadm.osd.0:Stopping osd.0...
2026-03-10T06:46:02.292 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@osd.0
2026-03-10T06:46:02.611 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:46:02 vm01 systemd[1]: Stopping Ceph osd.0 for 0204d884-1c4c-11f1-a017-9957fb527c0e...
2026-03-10T06:46:02.611 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:46:02 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-0[59922]: 2026-03-10T06:46:02.398+0000 7f74d026b640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T06:46:02.611 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:46:02 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-0[59922]: 2026-03-10T06:46:02.398+0000 7f74d026b640 -1 osd.0 19 *** Got signal Terminated ***
2026-03-10T06:46:02.611 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:46:02 vm01 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-0[59922]: 2026-03-10T06:46:02.398+0000 7f74d026b640 -1 osd.0 19 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T06:46:07.705 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:46:07 vm01 podman[66901]: 2026-03-10 06:46:07.424227821 +0000 UTC m=+5.038298721 container died a0d372a27f5948da9d7bfb39a54349137ff633518677e123f0c427bbeeab95d1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T06:46:07.705 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:46:07 vm01 podman[66901]: 2026-03-10 06:46:07.452085731 +0000 UTC m=+5.066156621 container remove a0d372a27f5948da9d7bfb39a54349137ff633518677e123f0c427bbeeab95d1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
2026-03-10T06:46:07.705 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:46:07 vm01 bash[66901]: ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-0
2026-03-10T06:46:07.705 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:46:07 vm01 podman[66968]: 2026-03-10 06:46:07.61596533 +0000 UTC m=+0.022046193 container create 10b47ce8e4896d74a3058800cd4cd380d86b8f2e4698eb6389d7490870851864 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-0-deactivate, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team )
2026-03-10T06:46:07.705 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:46:07 vm01 podman[66968]: 2026-03-10 06:46:07.651590426 +0000 UTC m=+0.057671279 container init 10b47ce8e4896d74a3058800cd4cd380d86b8f2e4698eb6389d7490870851864 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-0-deactivate, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
2026-03-10T06:46:07.705 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:46:07 vm01 podman[66968]: 2026-03-10 06:46:07.65438681 +0000 UTC m=+0.060467673 container start 10b47ce8e4896d74a3058800cd4cd380d86b8f2e4698eb6389d7490870851864 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-0-deactivate, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
2026-03-10T06:46:07.705 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:46:07 vm01 podman[66968]: 2026-03-10 06:46:07.656030065 +0000 UTC m=+0.062110928 container attach 10b47ce8e4896d74a3058800cd4cd380d86b8f2e4698eb6389d7490870851864 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-0-deactivate, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
2026-03-10T06:46:07.705 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 06:46:07 vm01 podman[66968]: 2026-03-10 06:46:07.606117869 +0000 UTC m=+0.012198732 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T06:46:07.839 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@osd.0.service'
2026-03-10T06:46:07.876 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T06:46:07.876 INFO:tasks.cephadm.osd.0:Stopped osd.0
2026-03-10T06:46:07.876 INFO:tasks.cephadm.osd.1:Stopping osd.1...
2026-03-10T06:46:07.876 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@osd.1
2026-03-10T06:46:08.394 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:46:07 vm08 systemd[1]: Stopping Ceph osd.1 for 0204d884-1c4c-11f1-a017-9957fb527c0e...
2026-03-10T06:46:08.394 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:46:07 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-1[56477]: 2026-03-10T06:46:07.966+0000 7f78e154d640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T06:46:08.394 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:46:07 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-1[56477]: 2026-03-10T06:46:07.966+0000 7f78e154d640 -1 osd.1 19 *** Got signal Terminated ***
2026-03-10T06:46:08.394 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:46:07 vm08 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-1[56477]: 2026-03-10T06:46:07.966+0000 7f78e154d640 -1 osd.1 19 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T06:46:13.337 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:46:13 vm08 podman[60705]: 2026-03-10 06:46:13.003064653 +0000 UTC m=+5.048243635 container died 8e9a1c57b17481c407c950dfe5b8d60bfb130556877034be9d3f38ca933edcd3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
2026-03-10T06:46:13.337 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:46:13 vm08 podman[60705]: 2026-03-10 06:46:13.016771417 +0000 UTC m=+5.061950399 container remove 8e9a1c57b17481c407c950dfe5b8d60bfb130556877034be9d3f38ca933edcd3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-10T06:46:13.337 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:46:13 vm08 bash[60705]: ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-1
2026-03-10T06:46:13.337 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:46:13 vm08 podman[60773]: 2026-03-10 06:46:13.15995943 +0000 UTC m=+0.017064249 container create 56c1a536b8e1a19b2552101be8c1b76a31843d89dc37115d1c72f7a4ba0b0abd (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-1-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
2026-03-10T06:46:13.337 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:46:13 vm08 podman[60773]: 2026-03-10 06:46:13.199243984 +0000 UTC m=+0.056348814 container init 56c1a536b8e1a19b2552101be8c1b76a31843d89dc37115d1c72f7a4ba0b0abd (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-1-deactivate, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
2026-03-10T06:46:13.337 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:46:13 vm08 podman[60773]: 2026-03-10 06:46:13.202396856 +0000 UTC m=+0.059501675 container start 56c1a536b8e1a19b2552101be8c1b76a31843d89dc37115d1c72f7a4ba0b0abd (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-1-deactivate, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T06:46:13.337 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:46:13 vm08 podman[60773]: 2026-03-10 06:46:13.206522788 +0000 UTC m=+0.063627607 container attach 56c1a536b8e1a19b2552101be8c1b76a31843d89dc37115d1c72f7a4ba0b0abd (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-1-deactivate, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, io.buildah.version=1.41.3)
2026-03-10T06:46:13.337 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 06:46:13 vm08 podman[60773]: 2026-03-10 06:46:13.152007267 +0000 UTC m=+0.009112095 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T06:46:13.386 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@osd.1.service'
2026-03-10T06:46:13.419 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T06:46:13.419 INFO:tasks.cephadm.osd.1:Stopped osd.1
2026-03-10T06:46:13.419 INFO:tasks.cephadm.osd.2:Stopping osd.2...
2026-03-10T06:46:13.419 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@osd.2
2026-03-10T06:46:13.857 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:46:13 vm09 systemd[1]: Stopping Ceph osd.2 for 0204d884-1c4c-11f1-a017-9957fb527c0e...
2026-03-10T06:46:13.857 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:46:13 vm09 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-2[58808]: 2026-03-10T06:46:13.528+0000 7fa4e2035640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T06:46:13.857 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:46:13 vm09 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-2[58808]: 2026-03-10T06:46:13.528+0000 7fa4e2035640 -1 osd.2 19 *** Got signal Terminated ***
2026-03-10T06:46:13.857 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:46:13 vm09 ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-2[58808]: 2026-03-10T06:46:13.528+0000 7fa4e2035640 -1 osd.2 19 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T06:46:18.819 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:46:18 vm09 podman[62629]: 2026-03-10 06:46:18.558645744 +0000 UTC m=+5.045147340 container died 30cac70de41c66411f8136b0e1a2e65324c2f987ec2fd009be31b7a978498f0b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
2026-03-10T06:46:18.819 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:46:18 vm09 podman[62629]: 2026-03-10 06:46:18.579928061 +0000 UTC m=+5.066429667 container remove 30cac70de41c66411f8136b0e1a2e65324c2f987ec2fd009be31b7a978498f0b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T06:46:18.819 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:46:18 vm09 bash[62629]: ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-2
2026-03-10T06:46:18.819 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:46:18 vm09 podman[62692]: 2026-03-10 06:46:18.727701053 +0000 UTC m=+0.018550704 container create 13e53626b677db856e4d8fe2c8c42a9422d3f35675ed05c099dfae5848e4e46c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-2-deactivate, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-10T06:46:18.819 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:46:18 vm09 podman[62692]: 2026-03-10 06:46:18.767850806 +0000 UTC m=+0.058700457 container init 13e53626b677db856e4d8fe2c8c42a9422d3f35675ed05c099dfae5848e4e46c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-2-deactivate, CEPH_REF=squid, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-10T06:46:18.820 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:46:18 vm09 podman[62692]: 2026-03-10 06:46:18.771188594 +0000 UTC m=+0.062038245 container start 13e53626b677db856e4d8fe2c8c42a9422d3f35675ed05c099dfae5848e4e46c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-2-deactivate, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
2026-03-10T06:46:18.820 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 06:46:18 vm09 podman[62692]: 2026-03-10 06:46:18.77329153 +0000 UTC m=+0.064141181 container attach 13e53626b677db856e4d8fe2c8c42a9422d3f35675ed05c099dfae5848e4e46c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-0204d884-1c4c-11f1-a017-9957fb527c0e-osd-2-deactivate, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True)
2026-03-10T06:46:18.928 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-0204d884-1c4c-11f1-a017-9957fb527c0e@osd.2.service'
2026-03-10T06:46:18.963 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T06:46:18.963 INFO:tasks.cephadm.osd.2:Stopped osd.2
2026-03-10T06:46:18.963 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid
0204d884-1c4c-11f1-a017-9957fb527c0e --force --keep-logs 2026-03-10T06:46:19.088 INFO:teuthology.orchestra.run.vm01.stdout:Deleting cluster with fsid: 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:46:20.015 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e --force --keep-logs 2026-03-10T06:46:20.146 INFO:teuthology.orchestra.run.vm08.stdout:Deleting cluster with fsid: 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:46:21.082 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e --force --keep-logs 2026-03-10T06:46:21.217 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:46:21.966 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T06:46:21.993 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T06:46:22.018 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T06:46:22.045 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-10T06:46:22.045 DEBUG:teuthology.misc:Transferring archived files from vm01:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/932/remote/vm01/crash 2026-03-10T06:46:22.045 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/crash -- . 
2026-03-10T06:46:22.071 INFO:teuthology.orchestra.run.vm01.stderr:tar: /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/crash: Cannot open: No such file or directory 2026-03-10T06:46:22.071 INFO:teuthology.orchestra.run.vm01.stderr:tar: Error is not recoverable: exiting now 2026-03-10T06:46:22.072 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/932/remote/vm08/crash 2026-03-10T06:46:22.072 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/crash -- . 2026-03-10T06:46:22.098 INFO:teuthology.orchestra.run.vm08.stderr:tar: /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/crash: Cannot open: No such file or directory 2026-03-10T06:46:22.098 INFO:teuthology.orchestra.run.vm08.stderr:tar: Error is not recoverable: exiting now 2026-03-10T06:46:22.099 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/932/remote/vm09/crash 2026-03-10T06:46:22.099 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/crash -- . 2026-03-10T06:46:22.127 INFO:teuthology.orchestra.run.vm09.stderr:tar: /var/lib/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/crash: Cannot open: No such file or directory 2026-03-10T06:46:22.127 INFO:teuthology.orchestra.run.vm09.stderr:tar: Error is not recoverable: exiting now 2026-03-10T06:46:22.128 INFO:tasks.cephadm:Checking cluster log for badness... 
2026-03-10T06:46:22.129 DEBUG:teuthology.orchestra.run.vm01:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v MON_DOWN | egrep -v POOL_APP_NOT_ENABLED | egrep -v 'mon down' | egrep -v 'mons down' | egrep -v 'out of quorum' | egrep -v CEPHADM_FAILED_DAEMON | head -n 1 2026-03-10T06:46:22.159 INFO:tasks.cephadm:Compressing logs... 2026-03-10T06:46:22.159 DEBUG:teuthology.orchestra.run.vm01:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T06:46:22.200 DEBUG:teuthology.orchestra.run.vm08:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T06:46:22.202 DEBUG:teuthology.orchestra.run.vm09:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T06:46:22.223 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T06:46:22.223 INFO:teuthology.orchestra.run.vm01.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T06:46:22.224 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-mon.a.log 2026-03-10T06:46:22.225 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.log 2026-03-10T06:46:22.226 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-mon.a.log: 89.9% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T06:46:22.226 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 
--verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-mgr.a.log 2026-03-10T06:46:22.227 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.log: 86.4% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.log.gz 2026-03-10T06:46:22.227 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.audit.log 2026-03-10T06:46:22.229 INFO:teuthology.orchestra.run.vm09.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T06:46:22.230 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T06:46:22.230 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-volume.log 2026-03-10T06:46:22.230 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T06:46:22.230 INFO:teuthology.orchestra.run.vm08.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T06:46:22.231 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-mon.c.log 2026-03-10T06:46:22.231 INFO:teuthology.orchestra.run.vm09.stderr: 88.9% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T06:46:22.231 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.cephadm.log 2026-03-10T06:46:22.232 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-volume.log 2026-03-10T06:46:22.232 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/cephadm.log: 88.5% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T06:46:22.232 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- 
/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-mon.b.log 2026-03-10T06:46:22.232 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-mon.c.log: 94.7% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-volume.log.gz 2026-03-10T06:46:22.233 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.audit.log 2026-03-10T06:46:22.234 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.cephadm.log: 80.5% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.cephadm.log.gz 2026-03-10T06:46:22.234 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.log 2026-03-10T06:46:22.234 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.audit.log: 90.3% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.audit.log.gz 2026-03-10T06:46:22.235 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-volume.log: 94.8% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-volume.log.gz 2026-03-10T06:46:22.235 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-osd.2.log 2026-03-10T06:46:22.235 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-mgr.a.log: gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.cephadm.log 2026-03-10T06:46:22.235 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.cephadm.log 2026-03-10T06:46:22.236 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.audit.log 2026-03-10T06:46:22.236 
INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.log: 86.6% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.log.gz 2026-03-10T06:46:22.236 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-mon.b.log: /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.cephadm.log: 80.5% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.cephadm.log.gz 2026-03-10T06:46:22.236 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.audit.log: 89.9% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.audit.log.gz 2026-03-10T06:46:22.236 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.log 2026-03-10T06:46:22.237 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.audit.log: 90.3% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.audit.log.gz 2026-03-10T06:46:22.237 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-mgr.b.log 2026-03-10T06:46:22.238 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.log: 86.6% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.log.gz 2026-03-10T06:46:22.238 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-volume.log 2026-03-10T06:46:22.238 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-osd.1.log 2026-03-10T06:46:22.239 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.cephadm.log: 81.4% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph.cephadm.log.gz 
2026-03-10T06:46:22.241 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-mgr.b.log: 90.8% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-mgr.b.log.gz 2026-03-10T06:46:22.242 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-osd.0.log 2026-03-10T06:46:22.245 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-volume.log: 94.8% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-volume.log.gz 2026-03-10T06:46:22.251 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-osd.2.log: 93.4% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-osd.2.log.gz 2026-03-10T06:46:22.263 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-osd.1.log: 93.5% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-osd.1.log.gz 2026-03-10T06:46:22.264 INFO:teuthology.orchestra.run.vm09.stderr: 92.8% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-mon.c.log.gz 2026-03-10T06:46:22.266 INFO:teuthology.orchestra.run.vm09.stderr: 2026-03-10T06:46:22.266 INFO:teuthology.orchestra.run.vm09.stderr:real 0m0.047s 2026-03-10T06:46:22.266 INFO:teuthology.orchestra.run.vm09.stderr:user 0m0.055s 2026-03-10T06:46:22.266 INFO:teuthology.orchestra.run.vm09.stderr:sys 0m0.018s 2026-03-10T06:46:22.268 INFO:teuthology.orchestra.run.vm08.stderr: 93.1% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-mon.b.log.gz 2026-03-10T06:46:22.270 INFO:teuthology.orchestra.run.vm08.stderr: 2026-03-10T06:46:22.270 INFO:teuthology.orchestra.run.vm08.stderr:real 0m0.052s 2026-03-10T06:46:22.270 INFO:teuthology.orchestra.run.vm08.stderr:user 0m0.063s 2026-03-10T06:46:22.270 INFO:teuthology.orchestra.run.vm08.stderr:sys 0m0.025s 
2026-03-10T06:46:22.277 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-osd.0.log: 90.8% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-mgr.a.log.gz 2026-03-10T06:46:22.277 INFO:teuthology.orchestra.run.vm01.stderr: 93.4% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-osd.0.log.gz 2026-03-10T06:46:22.328 INFO:teuthology.orchestra.run.vm01.stderr: 91.4% -- replaced with /var/log/ceph/0204d884-1c4c-11f1-a017-9957fb527c0e/ceph-mon.a.log.gz 2026-03-10T06:46:22.330 INFO:teuthology.orchestra.run.vm01.stderr: 2026-03-10T06:46:22.330 INFO:teuthology.orchestra.run.vm01.stderr:real 0m0.117s 2026-03-10T06:46:22.330 INFO:teuthology.orchestra.run.vm01.stderr:user 0m0.156s 2026-03-10T06:46:22.330 INFO:teuthology.orchestra.run.vm01.stderr:sys 0m0.017s 2026-03-10T06:46:22.330 INFO:tasks.cephadm:Archiving logs... 2026-03-10T06:46:22.331 DEBUG:teuthology.misc:Transferring archived files from vm01:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/932/remote/vm01/log 2026-03-10T06:46:22.331 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T06:46:22.407 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/932/remote/vm08/log 2026-03-10T06:46:22.407 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T06:46:22.440 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/932/remote/vm09/log 2026-03-10T06:46:22.440 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T06:46:22.471 INFO:tasks.cephadm:Removing cluster... 
2026-03-10T06:46:22.472 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e --force 2026-03-10T06:46:22.599 INFO:teuthology.orchestra.run.vm01.stdout:Deleting cluster with fsid: 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:46:22.816 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e --force 2026-03-10T06:46:22.945 INFO:teuthology.orchestra.run.vm08.stdout:Deleting cluster with fsid: 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:46:23.162 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 0204d884-1c4c-11f1-a017-9957fb527c0e --force 2026-03-10T06:46:23.292 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: 0204d884-1c4c-11f1-a017-9957fb527c0e 2026-03-10T06:46:23.527 INFO:tasks.cephadm:Removing cephadm ... 2026-03-10T06:46:23.528 DEBUG:teuthology.orchestra.run.vm01:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T06:46:23.545 DEBUG:teuthology.orchestra.run.vm08:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T06:46:23.560 DEBUG:teuthology.orchestra.run.vm09:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T06:46:23.578 INFO:tasks.cephadm:Teardown complete 2026-03-10T06:46:23.578 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-10T06:46:23.580 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 
2026-03-10T06:46:23.580 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-10T06:46:23.588 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-10T06:46:23.602 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-10T06:46:23.656 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 2026-03-10T06:46:23.657 DEBUG:teuthology.orchestra.run.vm01:> 2026-03-10T06:46:23.657 DEBUG:teuthology.orchestra.run.vm01:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-10T06:46:23.657 DEBUG:teuthology.orchestra.run.vm01:> sudo yum -y remove $d || true 2026-03-10T06:46:23.657 DEBUG:teuthology.orchestra.run.vm01:> done 2026-03-10T06:46:23.662 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, 
python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 2026-03-10T06:46:23.663 DEBUG:teuthology.orchestra.run.vm08:> 2026-03-10T06:46:23.663 DEBUG:teuthology.orchestra.run.vm08:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-10T06:46:23.663 DEBUG:teuthology.orchestra.run.vm08:> sudo yum -y remove $d || true 2026-03-10T06:46:23.663 DEBUG:teuthology.orchestra.run.vm08:> done 2026-03-10T06:46:23.667 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 2026-03-10T06:46:23.667 DEBUG:teuthology.orchestra.run.vm09:> 2026-03-10T06:46:23.667 DEBUG:teuthology.orchestra.run.vm09:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-10T06:46:23.667 DEBUG:teuthology.orchestra.run.vm09:> sudo yum -y remove $d || true 2026-03-10T06:46:23.667 DEBUG:teuthology.orchestra.run.vm09:> done 2026-03-10T06:46:23.849 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 
2026-03-10T06:46:23.850 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T06:46:23.850 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size 2026-03-10T06:46:23.850 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T06:46:23.850 INFO:teuthology.orchestra.run.vm01.stdout:Removing: 2026-03-10T06:46:23.850 INFO:teuthology.orchestra.run.vm01.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M 2026-03-10T06:46:23.850 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies: 2026-03-10T06:46:23.850 INFO:teuthology.orchestra.run.vm01.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-10T06:46:23.850 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:46:23.850 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary 2026-03-10T06:46:23.850 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T06:46:23.850 INFO:teuthology.orchestra.run.vm01.stdout:Remove 2 Packages 2026-03-10T06:46:23.850 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:46:23.850 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 39 M 2026-03-10T06:46:23.850 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check 2026-03-10T06:46:23.852 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded. 2026-03-10T06:46:23.852 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test 2026-03-10T06:46:23.858 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 
2026-03-10T06:46:23.858 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T06:46:23.858 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size 2026-03-10T06:46:23.858 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T06:46:23.858 INFO:teuthology.orchestra.run.vm08.stdout:Removing: 2026-03-10T06:46:23.858 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M 2026-03-10T06:46:23.858 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies: 2026-03-10T06:46:23.858 INFO:teuthology.orchestra.run.vm08.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-10T06:46:23.858 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:46:23.858 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary 2026-03-10T06:46:23.858 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T06:46:23.858 INFO:teuthology.orchestra.run.vm08.stdout:Remove 2 Packages 2026-03-10T06:46:23.858 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:46:23.858 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 39 M 2026-03-10T06:46:23.859 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check 2026-03-10T06:46:23.861 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded. 2026-03-10T06:46:23.861 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test 2026-03-10T06:46:23.866 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded. 2026-03-10T06:46:23.866 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction 2026-03-10T06:46:23.874 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded. 
2026-03-10T06:46:23.874 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction 2026-03-10T06:46:23.878 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-10T06:46:23.879 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T06:46:23.879 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-10T06:46:23.879 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T06:46:23.879 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-10T06:46:23.879 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M 2026-03-10T06:46:23.879 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-10T06:46:23.879 INFO:teuthology.orchestra.run.vm09.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-10T06:46:23.879 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:46:23.879 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-10T06:46:23.879 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T06:46:23.879 INFO:teuthology.orchestra.run.vm09.stdout:Remove 2 Packages 2026-03-10T06:46:23.879 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:46:23.879 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 39 M 2026-03-10T06:46:23.879 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-10T06:46:23.882 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-10T06:46:23.882 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-10T06:46:23.895 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 
2026-03-10T06:46:23.895 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-10T06:46:23.900 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1 2026-03-10T06:46:23.906 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1 2026-03-10T06:46:23.924 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:46:23.924 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:46:23.924 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-10T06:46:23.924 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-10T06:46:23.924 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 2026-03-10T06:46:23.924 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:46:23.928 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:46:23.929 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:46:23.929 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:46:23.929 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-10T06:46:23.929 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-10T06:46:23.929 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 
2026-03-10T06:46:23.929 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:46:23.931 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-10T06:46:23.932 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:46:23.938 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:46:23.941 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:46:23.952 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T06:46:23.953 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:46:23.953 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:46:23.953 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-10T06:46:23.954 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-10T06:46:23.954 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 
2026-03-10T06:46:23.954 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:23.955 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T06:46:23.957 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:46:23.964 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:46:23.979 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T06:46:24.016 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T06:46:24.016 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:46:24.025 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T06:46:24.025 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:46:24.045 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T06:46:24.046 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:46:24.062 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T06:46:24.062 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:24.062 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T06:46:24.062 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-10T06:46:24.062 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:24.062 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:24.081 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T06:46:24.081 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:24.081 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T06:46:24.081 INFO:teuthology.orchestra.run.vm01.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-10T06:46:24.081 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:24.081 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:24.099 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T06:46:24.099 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:24.100 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T06:46:24.100 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-10T06:46:24.100 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:24.100 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:24.260 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout:Remove 4 Packages
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 212 M
2026-03-10T06:46:24.261 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:46:24.264 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:46:24.264 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:46:24.280 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:24.280 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:24.281 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T06:46:24.281 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:24.281 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T06:46:24.281 INFO:teuthology.orchestra.run.vm01.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-10T06:46:24.281 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies:
2026-03-10T06:46:24.281 INFO:teuthology.orchestra.run.vm01.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-10T06:46:24.281 INFO:teuthology.orchestra.run.vm01.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-10T06:46:24.281 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-10T06:46:24.281 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:24.281 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T06:46:24.281 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:24.281 INFO:teuthology.orchestra.run.vm01.stdout:Remove 4 Packages
2026-03-10T06:46:24.281 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:24.281 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 212 M
2026-03-10T06:46:24.281 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T06:46:24.284 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T06:46:24.284 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T06:46:24.287 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:46:24.287 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:46:24.292 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:24.292 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:24.292 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T06:46:24.292 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:24.292 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T06:46:24.292 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-10T06:46:24.292 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-10T06:46:24.292 INFO:teuthology.orchestra.run.vm09.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-10T06:46:24.292 INFO:teuthology.orchestra.run.vm09.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-10T06:46:24.292 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-10T06:46:24.292 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:24.292 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T06:46:24.292 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:24.292 INFO:teuthology.orchestra.run.vm09.stdout:Remove 4 Packages
2026-03-10T06:46:24.292 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:24.293 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 212 M
2026-03-10T06:46:24.293 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T06:46:24.296 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T06:46:24.296 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T06:46:24.307 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T06:46:24.307 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T06:46:24.322 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T06:46:24.322 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T06:46:24.351 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T06:46:24.357 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T06:46:24.359 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-10T06:46:24.362 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-10T06:46:24.371 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T06:46:24.377 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T06:46:24.377 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T06:46:24.379 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-10T06:46:24.382 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-10T06:46:24.388 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T06:46:24.394 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T06:46:24.396 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-10T06:46:24.398 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T06:46:24.399 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-10T06:46:24.414 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T06:46:24.439 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T06:46:24.439 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T06:46:24.439 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T06:46:24.439 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T06:46:24.470 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T06:46:24.470 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T06:46:24.470 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T06:46:24.470 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T06:46:24.492 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T06:46:24.492 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:24.492 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T06:46:24.492 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T06:46:24.493 INFO:teuthology.orchestra.run.vm08.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T06:46:24.493 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:24.493 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:24.493 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T06:46:24.493 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T06:46:24.493 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T06:46:24.493 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T06:46:24.524 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T06:46:24.524 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:24.524 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T06:46:24.524 INFO:teuthology.orchestra.run.vm01.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T06:46:24.524 INFO:teuthology.orchestra.run.vm01.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T06:46:24.524 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:24.524 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:24.551 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T06:46:24.551 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:24.551 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T06:46:24.551 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T06:46:24.551 INFO:teuthology.orchestra.run.vm09.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T06:46:24.551 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:24.551 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:24.711 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout:Remove 8 Packages
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 28 M
2026-03-10T06:46:24.712 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:46:24.715 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:46:24.715 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:46:24.737 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies:
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout:Remove 8 Packages
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 28 M
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T06:46:24.738 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:46:24.739 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:46:24.741 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T06:46:24.741 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T06:46:24.764 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T06:46:24.764 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T06:46:24.768 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout:Remove 8 Packages
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 28 M
2026-03-10T06:46:24.769 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T06:46:24.772 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T06:46:24.772 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T06:46:24.781 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T06:46:24.786 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T06:46:24.790 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-10T06:46:24.791 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-10T06:46:24.794 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-10T06:46:24.796 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T06:46:24.796 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T06:46:24.797 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-10T06:46:24.798 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-10T06:46:24.807 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T06:46:24.812 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T06:46:24.816 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-10T06:46:24.818 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-10T06:46:24.821 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:46:24.821 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:46:24.821 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T06:46:24.821 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-10T06:46:24.821 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-10T06:46:24.821 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:24.821 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-10T06:46:24.821 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:46:24.823 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-10T06:46:24.825 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-10T06:46:24.829 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:46:24.838 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T06:46:24.842 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T06:46:24.843 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:46:24.843 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:46:24.844 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T06:46:24.844 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-10T06:46:24.844 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-10T06:46:24.844 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:24.844 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:46:24.845 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-10T06:46:24.847 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-10T06:46:24.850 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-10T06:46:24.851 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:46:24.851 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:46:24.851 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T06:46:24.851 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-10T06:46:24.851 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-10T06:46:24.851 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:24.852 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:46:24.853 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-10T06:46:24.853 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:46:24.854 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-10T06:46:24.873 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:46:24.873 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:46:24.873 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T06:46:24.873 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-10T06:46:24.873 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-10T06:46:24.873 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:24.874 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:46:24.874 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:46:24.874 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T06:46:24.874 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-10T06:46:24.874 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-10T06:46:24.874 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:24.875 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:46:24.875 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:46:24.883 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:46:24.903 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:46:24.903 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:46:24.903 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T06:46:24.903 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-10T06:46:24.903 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-10T06:46:24.903 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:24.905 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:46:24.934 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:46:24.934 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T06:46:24.934 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-10T06:46:24.934 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-10T06:46:24.934 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-10T06:46:24.934 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-10T06:46:24.934 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-10T06:46:24.934 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-10T06:46:24.966 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:46:24.966 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T06:46:24.966 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-10T06:46:24.966 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-10T06:46:24.966 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-10T06:46:24.966 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-10T06:46:24.966 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-10T06:46:24.966 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-10T06:46:24.989 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-10T06:46:24.989 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:24.989 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T06:46:24.989 INFO:teuthology.orchestra.run.vm08.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:24.989 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:24.989 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:24.989 INFO:teuthology.orchestra.run.vm08.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T06:46:24.989 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T06:46:24.989 INFO:teuthology.orchestra.run.vm08.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T06:46:24.989 INFO:teuthology.orchestra.run.vm08.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T06:46:24.989 INFO:teuthology.orchestra.run.vm08.stdout: zip-3.0-35.el9.x86_64
2026-03-10T06:46:24.989 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:24.989 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:24.990 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-10T06:46:24.990 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8 2026-03-10T06:46:24.990 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8 2026-03-10T06:46:24.990 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8 2026-03-10T06:46:24.990 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8 2026-03-10T06:46:24.990 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8 2026-03-10T06:46:24.990 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8 2026-03-10T06:46:24.990 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8 2026-03-10T06:46:25.019 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8 2026-03-10T06:46:25.019 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:46:25.019 INFO:teuthology.orchestra.run.vm01.stdout:Removed: 2026-03-10T06:46:25.019 INFO:teuthology.orchestra.run.vm01.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:25.019 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:25.019 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:25.019 INFO:teuthology.orchestra.run.vm01.stdout: lua-5.4.4-4.el9.x86_64 2026-03-10T06:46:25.019 INFO:teuthology.orchestra.run.vm01.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-10T06:46:25.019 INFO:teuthology.orchestra.run.vm01.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-10T06:46:25.019 INFO:teuthology.orchestra.run.vm01.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T06:46:25.019 INFO:teuthology.orchestra.run.vm01.stdout: zip-3.0-35.el9.x86_64 
2026-03-10T06:46:25.019 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:46:25.019 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T06:46:25.040 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8 2026-03-10T06:46:25.040 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:46:25.040 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-10T06:46:25.040 INFO:teuthology.orchestra.run.vm09.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:25.040 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:25.040 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:25.040 INFO:teuthology.orchestra.run.vm09.stdout: lua-5.4.4-4.el9.x86_64 2026-03-10T06:46:25.040 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-10T06:46:25.040 INFO:teuthology.orchestra.run.vm09.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-10T06:46:25.040 INFO:teuthology.orchestra.run.vm09.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T06:46:25.040 INFO:teuthology.orchestra.run.vm09.stdout: zip-3.0-35.el9.x86_64 2026-03-10T06:46:25.040 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:46:25.040 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-10T06:46:25.214 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 
2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout:=========================================================================================== 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout:=========================================================================================== 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout:Removing: 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages: 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused 
dependencies: 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k 2026-03-10T06:46:25.219 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M 2026-03-10T06:46:25.220 
INFO:teuthology.orchestra.run.vm08.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k 2026-03-10T06:46:25.220 
INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 
2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: 
python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-10T06:46:25.220 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 2026-03-10T06:46:25.221 
INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout:=========================================================================================== 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout:Remove 102 Packages 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 613 M 2026-03-10T06:46:25.221 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check 2026-03-10T06:46:25.228 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 
2026-03-10T06:46:25.234 INFO:teuthology.orchestra.run.vm01.stdout:=========================================================================================== 2026-03-10T06:46:25.234 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size 2026-03-10T06:46:25.234 INFO:teuthology.orchestra.run.vm01.stdout:=========================================================================================== 2026-03-10T06:46:25.234 INFO:teuthology.orchestra.run.vm01.stdout:Removing: 2026-03-10T06:46:25.234 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout:Removing dependent packages: 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused 
dependencies: 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M 2026-03-10T06:46:25.235 
INFO:teuthology.orchestra.run.vm01.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k 2026-03-10T06:46:25.235 
INFO:teuthology.orchestra.run.vm01.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M 2026-03-10T06:46:25.235 INFO:teuthology.orchestra.run.vm01.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 
2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: 
python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 2026-03-10T06:46:25.236 
INFO:teuthology.orchestra.run.vm01.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout:=========================================================================================== 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout:Remove 102 Packages 2026-03-10T06:46:25.236 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:46:25.237 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 613 M 2026-03-10T06:46:25.237 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check 2026-03-10T06:46:25.238 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout:=========================================================================================== 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout:=========================================================================================== 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages: 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused 
dependencies: 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M 2026-03-10T06:46:25.244 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M 2026-03-10T06:46:25.245 
INFO:teuthology.orchestra.run.vm09.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k 2026-03-10T06:46:25.245 
INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 
2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-10T06:46:25.245 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 2026-03-10T06:46:25.246 
INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout:=========================================================================================== 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout:Remove 102 Packages 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 613 M 2026-03-10T06:46:25.246 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-10T06:46:25.250 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded. 2026-03-10T06:46:25.250 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test 2026-03-10T06:46:25.263 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded. 
2026-03-10T06:46:25.263 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test 2026-03-10T06:46:25.272 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-10T06:46:25.272 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-10T06:46:25.355 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded. 2026-03-10T06:46:25.355 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction 2026-03-10T06:46:25.370 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded. 2026-03-10T06:46:25.371 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction 2026-03-10T06:46:25.378 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-10T06:46:25.378 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-10T06:46:25.499 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1 2026-03-10T06:46:25.500 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T06:46:25.509 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T06:46:25.520 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1 2026-03-10T06:46:25.520 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T06:46:25.528 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T06:46:25.529 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-10T06:46:25.529 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T06:46:25.531 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:46:25.531 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported 
for this. 2026-03-10T06:46:25.531 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T06:46:25.531 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-10T06:46:25.531 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 2026-03-10T06:46:25.531 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:46:25.531 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:46:25.536 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T06:46:25.545 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:46:25.549 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:46:25.549 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:46:25.549 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T06:46:25.549 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-10T06:46:25.549 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 
2026-03-10T06:46:25.549 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:46:25.550 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:46:25.553 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:46:25.553 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:46:25.553 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T06:46:25.553 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-10T06:46:25.553 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 2026-03-10T06:46:25.553 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:46:25.553 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:46:25.564 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:46:25.566 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:46:25.570 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102 2026-03-10T06:46:25.570 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T06:46:25.588 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102 2026-03-10T06:46:25.588 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T06:46:25.589 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102 
2026-03-10T06:46:25.589 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T06:46:25.628 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T06:46:25.636 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102 2026-03-10T06:46:25.640 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102 2026-03-10T06:46:25.640 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:46:25.646 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T06:46:25.646 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T06:46:25.651 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:46:25.655 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102 2026-03-10T06:46:25.656 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102 2026-03-10T06:46:25.657 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102 2026-03-10T06:46:25.660 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102 2026-03-10T06:46:25.660 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:46:25.660 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102 2026-03-10T06:46:25.660 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:46:25.661 
INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102 2026-03-10T06:46:25.669 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102 2026-03-10T06:46:25.673 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:46:25.673 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102 2026-03-10T06:46:25.674 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:46:25.681 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102 2026-03-10T06:46:25.681 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102 2026-03-10T06:46:25.685 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102 2026-03-10T06:46:25.686 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102 2026-03-10T06:46:25.694 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102 2026-03-10T06:46:25.695 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102 2026-03-10T06:46:25.695 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:46:25.695 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:46:25.695 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-10T06:46:25.695 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 
2026-03-10T06:46:25.695 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-03-10T06:46:25.695 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:46:25.698 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102 2026-03-10T06:46:25.699 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102 2026-03-10T06:46:25.701 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:46:25.709 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:46:25.718 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:46:25.718 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:46:25.718 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-10T06:46:25.718 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-10T06:46:25.718 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-03-10T06:46:25.718 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:46:25.719 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:46:25.719 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:46:25.719 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-10T06:46:25.719 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 
2026-03-10T06:46:25.719 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-03-10T06:46:25.719 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:46:25.723 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:46:25.724 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:46:25.727 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:46:25.727 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:46:25.727 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-03-10T06:46:25.727 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:46:25.733 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:46:25.733 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:46:25.735 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:46:25.745 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:46:25.748 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:46:25.748 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:46:25.748 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-03-10T06:46:25.748 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:46:25.748 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:46:25.748 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:46:25.748 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-03-10T06:46:25.748 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:46:25.749 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102 2026-03-10T06:46:25.754 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102 2026-03-10T06:46:25.756 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:46:25.756 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:46:25.759 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102 2026-03-10T06:46:25.768 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:46:25.768 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:46:25.769 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102 2026-03-10T06:46:25.770 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102 2026-03-10T06:46:25.771 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102 2026-03-10T06:46:25.776 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102 2026-03-10T06:46:25.776 
INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102 2026-03-10T06:46:25.781 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102 2026-03-10T06:46:25.781 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102 2026-03-10T06:46:25.781 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102 2026-03-10T06:46:25.787 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102 2026-03-10T06:46:25.790 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102 2026-03-10T06:46:25.790 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102 2026-03-10T06:46:25.797 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102 2026-03-10T06:46:25.803 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102 2026-03-10T06:46:25.803 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102 2026-03-10T06:46:25.804 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102 2026-03-10T06:46:25.810 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102 2026-03-10T06:46:25.810 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102 2026-03-10T06:46:25.820 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102 2026-03-10T06:46:25.820 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102 2026-03-10T06:46:25.827 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102 2026-03-10T06:46:25.827 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : 
python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102
2026-03-10T06:46:25.835 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102
2026-03-10T06:46:25.842 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102
2026-03-10T06:46:25.844 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102
2026-03-10T06:46:25.855 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102
2026-03-10T06:46:25.858 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102
2026-03-10T06:46:25.858 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102
2026-03-10T06:46:25.866 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102
2026-03-10T06:46:25.867 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102
2026-03-10T06:46:25.868 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102
2026-03-10T06:46:25.868 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T06:46:25.869 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102
2026-03-10T06:46:25.870 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102
2026-03-10T06:46:25.876 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T06:46:25.879 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102
2026-03-10T06:46:25.879 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102
2026-03-10T06:46:25.890 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102
2026-03-10T06:46:25.890 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T06:46:25.890 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102
2026-03-10T06:46:25.890 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T06:46:25.898 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T06:46:25.899 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T06:46:25.974 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102
2026-03-10T06:46:25.990 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102
2026-03-10T06:46:25.996 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102
2026-03-10T06:46:25.998 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102
2026-03-10T06:46:26.005 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T06:46:26.005 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-10T06:46:26.005 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:26.006 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T06:46:26.013 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102
2026-03-10T06:46:26.014 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102
2026-03-10T06:46:26.028 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T06:46:26.028 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-10T06:46:26.028 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:26.029 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T06:46:26.029 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-10T06:46:26.029 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:26.030 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T06:46:26.030 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T06:46:26.034 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T06:46:26.050 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102
2026-03-10T06:46:26.056 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102
2026-03-10T06:46:26.057 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T06:46:26.057 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T06:46:26.058 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102
2026-03-10T06:46:26.061 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102
2026-03-10T06:46:26.073 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102
2026-03-10T06:46:26.074 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102
2026-03-10T06:46:26.079 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102
2026-03-10T06:46:26.080 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102
2026-03-10T06:46:26.081 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T06:46:26.081 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:46:26.081 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T06:46:26.081 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-10T06:46:26.081 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-10T06:46:26.081 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:26.082 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102
2026-03-10T06:46:26.082 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T06:46:26.083 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102
2026-03-10T06:46:26.084 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102
2026-03-10T06:46:26.085 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102
2026-03-10T06:46:26.095 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T06:46:26.099 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102
2026-03-10T06:46:26.101 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102
2026-03-10T06:46:26.103 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102
2026-03-10T06:46:26.104 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T06:46:26.104 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:46:26.104 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T06:46:26.104 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-10T06:46:26.104 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-10T06:46:26.104 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:26.105 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T06:46:26.105 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:46:26.105 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T06:46:26.105 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-10T06:46:26.105 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-10T06:46:26.105 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:26.105 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T06:46:26.106 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102
2026-03-10T06:46:26.106 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T06:46:26.109 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102
2026-03-10T06:46:26.113 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102
2026-03-10T06:46:26.117 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T06:46:26.118 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102
2026-03-10T06:46:26.120 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T06:46:26.121 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102
2026-03-10T06:46:26.124 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102
2026-03-10T06:46:26.124 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102
2026-03-10T06:46:26.126 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102
2026-03-10T06:46:26.126 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102
2026-03-10T06:46:26.129 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102
2026-03-10T06:46:26.129 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102
2026-03-10T06:46:26.131 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102
2026-03-10T06:46:26.133 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102
2026-03-10T06:46:26.135 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102
2026-03-10T06:46:26.136 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102
2026-03-10T06:46:26.139 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102
2026-03-10T06:46:26.141 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102
2026-03-10T06:46:26.143 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102
2026-03-10T06:46:26.166 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102
2026-03-10T06:46:26.178 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102
2026-03-10T06:46:26.180 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102
2026-03-10T06:46:26.185 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102
2026-03-10T06:46:26.187 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102
2026-03-10T06:46:26.189 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102
2026-03-10T06:46:26.191 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102
2026-03-10T06:46:26.192 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102
2026-03-10T06:46:26.193 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102
2026-03-10T06:46:26.204 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102
2026-03-10T06:46:26.205 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102
2026-03-10T06:46:26.206 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102
2026-03-10T06:46:26.207 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102
2026-03-10T06:46:26.212 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102
2026-03-10T06:46:26.213 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102
2026-03-10T06:46:26.214 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102
2026-03-10T06:46:26.214 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:46:26.214 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:46:26.214 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T06:46:26.214 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:26.215 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:46:26.215 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102
2026-03-10T06:46:26.217 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102
2026-03-10T06:46:26.218 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102
2026-03-10T06:46:26.221 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102
2026-03-10T06:46:26.221 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102
2026-03-10T06:46:26.223 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:46:26.224 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102
2026-03-10T06:46:26.227 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102
2026-03-10T06:46:26.229 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102
2026-03-10T06:46:26.231 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102
2026-03-10T06:46:26.233 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102
2026-03-10T06:46:26.235 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102
2026-03-10T06:46:26.238 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102
2026-03-10T06:46:26.241 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102
2026-03-10T06:46:26.242 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:46:26.242 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:46:26.242 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T06:46:26.242 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:26.242 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:46:26.242 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:46:26.242 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T06:46:26.242 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:26.242 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:46:26.243 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:46:26.249 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102
2026-03-10T06:46:26.251 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:46:26.252 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:46:26.253 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102
2026-03-10T06:46:26.254 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102
2026-03-10T06:46:26.254 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102
2026-03-10T06:46:26.256 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102
2026-03-10T06:46:26.256 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102
2026-03-10T06:46:26.256 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102
2026-03-10T06:46:26.259 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102
2026-03-10T06:46:26.259 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102
2026-03-10T06:46:26.260 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102
2026-03-10T06:46:26.262 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102
2026-03-10T06:46:26.262 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102
2026-03-10T06:46:26.262 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102
2026-03-10T06:46:26.265 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102
2026-03-10T06:46:26.265 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102
2026-03-10T06:46:26.267 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102
2026-03-10T06:46:26.268 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102
2026-03-10T06:46:26.268 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102
2026-03-10T06:46:26.270 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102
2026-03-10T06:46:26.271 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102
2026-03-10T06:46:26.272 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102
2026-03-10T06:46:26.273 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102
2026-03-10T06:46:26.274 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102
2026-03-10T06:46:26.277 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102
2026-03-10T06:46:26.281 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102
2026-03-10T06:46:26.281 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102
2026-03-10T06:46:26.282 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102
2026-03-10T06:46:26.286 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102
2026-03-10T06:46:26.286 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102
2026-03-10T06:46:26.288 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102
2026-03-10T06:46:26.288 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102
2026-03-10T06:46:26.288 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102
2026-03-10T06:46:26.291 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102
2026-03-10T06:46:26.292 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102
2026-03-10T06:46:26.293 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102
2026-03-10T06:46:26.294 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102
2026-03-10T06:46:26.294 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102
2026-03-10T06:46:26.298 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102
2026-03-10T06:46:26.298 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102
2026-03-10T06:46:26.300 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102
2026-03-10T06:46:26.301 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102
2026-03-10T06:46:26.302 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102
2026-03-10T06:46:26.304 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102
2026-03-10T06:46:26.306 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102
2026-03-10T06:46:26.307 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102
2026-03-10T06:46:26.309 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102
2026-03-10T06:46:26.311 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102
2026-03-10T06:46:26.311 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102
2026-03-10T06:46:26.313 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102
2026-03-10T06:46:26.315 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102
2026-03-10T06:46:26.317 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102
2026-03-10T06:46:26.319 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102
2026-03-10T06:46:26.320 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102
2026-03-10T06:46:26.322 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102
2026-03-10T06:46:26.323 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102
2026-03-10T06:46:26.324 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102
2026-03-10T06:46:26.326 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102
2026-03-10T06:46:26.327 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102
2026-03-10T06:46:26.328 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102
2026-03-10T06:46:26.329 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102
2026-03-10T06:46:26.332 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102
2026-03-10T06:46:26.332 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102
2026-03-10T06:46:26.334 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102
2026-03-10T06:46:26.335 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102
2026-03-10T06:46:26.336 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102
2026-03-10T06:46:26.337 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102
2026-03-10T06:46:26.338 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102
2026-03-10T06:46:26.339 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102
2026-03-10T06:46:26.342 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102
2026-03-10T06:46:26.343 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102
2026-03-10T06:46:26.347 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102
2026-03-10T06:46:26.347 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102
2026-03-10T06:46:26.350 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102
2026-03-10T06:46:26.353 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102
2026-03-10T06:46:26.356 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102
2026-03-10T06:46:26.356 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102
2026-03-10T06:46:26.359 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102
2026-03-10T06:46:26.360 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102
2026-03-10T06:46:26.360 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102
2026-03-10T06:46:26.362 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102
2026-03-10T06:46:26.364 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102
2026-03-10T06:46:26.366 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102
2026-03-10T06:46:26.367 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:46:26.367 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-10T06:46:26.367 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:26.369 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102
2026-03-10T06:46:26.370 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102
2026-03-10T06:46:26.373 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102
2026-03-10T06:46:26.376 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:46:26.390 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:46:26.390 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-10T06:46:26.390 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:26.394 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:46:26.394 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-10T06:46:26.394 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:26.397 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:46:26.401 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:46:26.406 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:46:26.406 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T06:46:26.419 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T06:46:26.424 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102
2026-03-10T06:46:26.427 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:46:26.427 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102
2026-03-10T06:46:26.427 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T06:46:26.429 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102
2026-03-10T06:46:26.429 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T06:46:26.430 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:46:26.431 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T06:46:26.440 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T06:46:26.442 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T06:46:26.445 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102
2026-03-10T06:46:26.448 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102
2026-03-10T06:46:26.449 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102
2026-03-10T06:46:26.451 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102
2026-03-10T06:46:26.451 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T06:46:26.451 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102
2026-03-10T06:46:26.453 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102
2026-03-10T06:46:26.453 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T06:46:32.159 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T06:46:32.160 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /sys
2026-03-10T06:46:32.160 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /proc
2026-03-10T06:46:32.160 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /mnt
2026-03-10T06:46:32.160 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /var/tmp
2026-03-10T06:46:32.160 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /home
2026-03-10T06:46:32.160 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /root
2026-03-10T06:46:32.160 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /tmp
2026-03-10T06:46:32.160 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:32.168 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102
2026-03-10T06:46:32.187 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:46:32.187 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:46:32.196 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:46:32.199 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102
2026-03-10T06:46:32.201 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102
2026-03-10T06:46:32.203 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102
2026-03-10T06:46:32.205 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102
2026-03-10T06:46:32.206 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102
2026-03-10T06:46:32.219 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102
2026-03-10T06:46:32.221 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102
2026-03-10T06:46:32.224 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102
2026-03-10T06:46:32.227 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102
2026-03-10T06:46:32.230 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102
2026-03-10T06:46:32.235 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102
2026-03-10T06:46:32.243 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102
2026-03-10T06:46:32.249 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102
2026-03-10T06:46:32.249 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-10T06:46:32.280 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T06:46:32.281 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /sys
2026-03-10T06:46:32.281 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /proc
2026-03-10T06:46:32.281 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /mnt
2026-03-10T06:46:32.281 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /var/tmp
2026-03-10T06:46:32.281 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /home
2026-03-10T06:46:32.281 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /root
2026-03-10T06:46:32.281 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /tmp
2026-03-10T06:46:32.281 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:32.284 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T06:46:32.284 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /sys
2026-03-10T06:46:32.284 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /proc
2026-03-10T06:46:32.284 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /mnt
2026-03-10T06:46:32.284 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /var/tmp
2026-03-10T06:46:32.284 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /home
2026-03-10T06:46:32.284 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /root
2026-03-10T06:46:32.284 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /tmp
2026-03-10T06:46:32.284 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:32.290 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102
2026-03-10T06:46:32.292 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102
2026-03-10T06:46:32.308 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:46:32.308 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:46:32.309 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:46:32.310 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:46:32.315 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:46:32.317 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102
2026-03-10T06:46:32.319 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:46:32.320 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102
2026-03-10T06:46:32.322 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102
2026-03-10T06:46:32.322 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102
2026-03-10T06:46:32.325 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102
2026-03-10T06:46:32.325 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102
2026-03-10T06:46:32.325 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102
2026-03-10T06:46:32.328 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102
2026-03-10T06:46:32.330 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102
2026-03-10T06:46:32.330 INFO:teuthology.orchestra.run.vm01.stdout: Erasing :
libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T06:46:32.336 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T06:46:32.337 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102 2026-03-10T06:46:32.339 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102 2026-03-10T06:46:32.343 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102 2026-03-10T06:46:32.345 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T06:46:32.345 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102 2026-03-10T06:46:32.346 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T06:46:32.346 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102 2026-03-10T06:46:32.346 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:46:32.346 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102 2026-03-10T06:46:32.346 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102 2026-03-10T06:46:32.346 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102 2026-03-10T06:46:32.346 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102 2026-03-10T06:46:32.346 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:46:32.346 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102 
2026-03-10T06:46:32.346 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102 2026-03-10T06:46:32.346 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : 
libconfig-1.7.2-9.el9.x86_64 24/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102 2026-03-10T06:46:32.347 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102 2026-03-10T06:46:32.348 
INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102 2026-03-10T06:46:32.348 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102 2026-03-10T06:46:32.349 
INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102 
2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 2026-03-10T06:46:32.349 
INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-10T06:46:32.349 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-10T06:46:32.350 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102 2026-03-10T06:46:32.350 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102 2026-03-10T06:46:32.350 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102 2026-03-10T06:46:32.350 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102 2026-03-10T06:46:32.350 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102 2026-03-10T06:46:32.350 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102 2026-03-10T06:46:32.350 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102 2026-03-10T06:46:32.350 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102 2026-03-10T06:46:32.350 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102 2026-03-10T06:46:32.352 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : 
python3-markupsafe-1.1.1-12.el9.x86_64 97/102 2026-03-10T06:46:32.355 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102 2026-03-10T06:46:32.358 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102 2026-03-10T06:46:32.361 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102 2026-03-10T06:46:32.363 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102 2026-03-10T06:46:32.363 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T06:46:32.369 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102 2026-03-10T06:46:32.374 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102 2026-03-10T06:46:32.374 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout:Removed: 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.438 
INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T06:46:32.438 
INFO:teuthology.orchestra.run.vm08.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T06:46:32.438 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T06:46:32.439 
INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-10T06:46:32.439 
INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T06:46:32.439 
INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T06:46:32.439 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T06:46:32.440 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T06:46:32.440 
INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T06:46:32.440 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.440 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:46:32.440 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T06:46:32.465 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T06:46:32.465 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102 2026-03-10T06:46:32.465 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:46:32.465 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102 2026-03-10T06:46:32.465 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102 2026-03-10T06:46:32.465 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102 2026-03-10T06:46:32.465 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102 2026-03-10T06:46:32.465 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:46:32.465 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102 2026-03-10T06:46:32.465 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102 2026-03-10T06:46:32.465 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102 2026-03-10T06:46:32.466 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102 2026-03-10T06:46:32.466 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : 
ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:46:32.466 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102 2026-03-10T06:46:32.466 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102 2026-03-10T06:46:32.466 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102 2026-03-10T06:46:32.466 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102 2026-03-10T06:46:32.466 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102 2026-03-10T06:46:32.466 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : 
libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 
44/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 
60/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102 2026-03-10T06:46:32.467 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-10T06:46:32.468 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-10T06:46:32.468 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 2026-03-10T06:46:32.469 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-10T06:46:32.469 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-10T06:46:32.469 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102 2026-03-10T06:46:32.469 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102 2026-03-10T06:46:32.469 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102 2026-03-10T06:46:32.469 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102 2026-03-10T06:46:32.469 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102 2026-03-10T06:46:32.469 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102 2026-03-10T06:46:32.469 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102 2026-03-10T06:46:32.486 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T06:46:32.486 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102 2026-03-10T06:46:32.486 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:46:32.486 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102 2026-03-10T06:46:32.486 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102 2026-03-10T06:46:32.486 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102 2026-03-10T06:46:32.486 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102 
2026-03-10T06:46:32.486 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:46:32.486 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102 2026-03-10T06:46:32.486 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102 2026-03-10T06:46:32.486 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102 2026-03-10T06:46:32.486 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102 2026-03-10T06:46:32.486 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:46:32.486 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying 
: ledmon-libs-1.1.0-3.el9.x86_64 22/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102 2026-03-10T06:46:32.487 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102 2026-03-10T06:46:32.487 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102 2026-03-10T06:46:32.487 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102 
2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-10T06:46:32.488 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102 2026-03-10T06:46:32.488 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102 2026-03-10T06:46:32.489 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T06:46:32.557 
INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:46:32.557 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T06:46:32.558 
INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T06:46:32.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T06:46:32.559 
INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T06:46:32.559 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T06:46:32.560 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T06:46:32.560 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T06:46:32.560 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T06:46:32.560 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T06:46:32.560 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:46:32.560 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T06:46:32.560 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 
2026-03-10T06:46:32.566 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-10T06:46:32.566 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:32.566 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T06:46:32.566 INFO:teuthology.orchestra.run.vm01.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-10T06:46:32.566 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: python3-chardet-4.0.0-5.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-10T06:46:32.567 INFO:teuthology.orchestra.run.vm01.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-idna-2.10-7.el9.1.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-jsonpatch-1.21-16.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-jsonpointer-2.0-4.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-oauthlib-3.1.1-5.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-ply-3.11-14.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-prettytable-0.7.2-27.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-pysocks-1.7.1-12.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-pytz-2021.1-5.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-10T06:46:32.568 INFO:teuthology.orchestra.run.vm01.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-10T06:46:32.569 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-10T06:46:32.569 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-10T06:46:32.569 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-10T06:46:32.569 INFO:teuthology.orchestra.run.vm01.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-10T06:46:32.569 INFO:teuthology.orchestra.run.vm01.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:32.569 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:32.569 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:32.659 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:32.659 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:32.659 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T06:46:32.659 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:32.659 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T06:46:32.659 INFO:teuthology.orchestra.run.vm08.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k
2026-03-10T06:46:32.659 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:32.659 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:46:32.659 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:32.659 INFO:teuthology.orchestra.run.vm08.stdout:Remove 1 Package
2026-03-10T06:46:32.659 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:32.659 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 775 k
2026-03-10T06:46:32.659 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:46:32.661 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:46:32.661 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:46:32.662 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:46:32.662 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:46:32.678 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T06:46:32.678 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:46:32.774 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:32.774 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:32.774 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T06:46:32.774 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:32.774 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T06:46:32.774 INFO:teuthology.orchestra.run.vm09.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k
2026-03-10T06:46:32.774 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:32.774 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T06:46:32.774 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:32.774 INFO:teuthology.orchestra.run.vm09.stdout:Remove 1 Package
2026-03-10T06:46:32.774 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:32.774 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 775 k
2026-03-10T06:46:32.774 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T06:46:32.776 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T06:46:32.776 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T06:46:32.777 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T06:46:32.777 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:32.777 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T06:46:32.778 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:32.778 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T06:46:32.778 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:32.778 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T06:46:32.778 INFO:teuthology.orchestra.run.vm01.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k
2026-03-10T06:46:32.778 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:32.778 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T06:46:32.778 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:32.778 INFO:teuthology.orchestra.run.vm01.stdout:Remove 1 Package
2026-03-10T06:46:32.778 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:32.778 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 775 k
2026-03-10T06:46:32.778 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T06:46:32.780 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T06:46:32.780 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T06:46:32.781 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T06:46:32.782 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T06:46:32.789 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:46:32.795 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T06:46:32.795 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:46:32.801 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T06:46:32.801 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:46:32.832 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:46:32.832 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:32.832 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T06:46:32.832 INFO:teuthology.orchestra.run.vm08.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:46:32.832 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:32.832 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:32.907 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:46:32.914 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:46:32.951 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:46:32.951 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:32.951 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T06:46:32.951 INFO:teuthology.orchestra.run.vm09.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:46:32.951 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:32.951 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:32.951 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:46:32.951 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:32.951 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T06:46:32.951 INFO:teuthology.orchestra.run.vm01.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:46:32.951 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:32.951 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:33.034 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-immutable-object-cache
2026-03-10T06:46:33.034 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:33.037 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:33.037 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:33.037 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:33.134 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-immutable-object-cache
2026-03-10T06:46:33.135 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:33.138 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:33.138 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-immutable-object-cache
2026-03-10T06:46:33.138 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:33.139 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:33.139 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:33.142 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:33.142 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:33.142 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:33.208 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr
2026-03-10T06:46:33.208 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:33.211 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:33.211 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:33.212 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:33.312 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-mgr
2026-03-10T06:46:33.312 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:33.315 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:33.315 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:33.315 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:33.322 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr
2026-03-10T06:46:33.323 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:33.326 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:33.326 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:33.326 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:33.390 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-dashboard
2026-03-10T06:46:33.390 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:33.393 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:33.394 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:33.394 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:33.489 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-mgr-dashboard
2026-03-10T06:46:33.489 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:33.493 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:33.493 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:33.493 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:33.506 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-dashboard
2026-03-10T06:46:33.506 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:33.510 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:33.510 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:33.510 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:33.575 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-10T06:46:33.575 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:33.578 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:33.578 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:33.578 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:33.665 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-10T06:46:33.665 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:33.669 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:33.669 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:33.669 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:33.694 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-10T06:46:33.694 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:33.697 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:33.697 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:33.697 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:33.757 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-rook
2026-03-10T06:46:33.757 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:33.761 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:33.762 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:33.762 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:33.852 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-mgr-rook
2026-03-10T06:46:33.852 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:33.856 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:33.856 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:33.856 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:33.884 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-rook
2026-03-10T06:46:33.884 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:33.887 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:33.888 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:33.888 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:33.943 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-cephadm
2026-03-10T06:46:33.943 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:33.946 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:33.947 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:33.947 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:34.032 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-mgr-cephadm
2026-03-10T06:46:34.032 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:34.035 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:34.036 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:34.036 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:34.063 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-cephadm
2026-03-10T06:46:34.063 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:34.066 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:34.066 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:34.067 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:34.133 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:34.134 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:34.134 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T06:46:34.134 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:34.134 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T06:46:34.134 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M
2026-03-10T06:46:34.134 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:34.134 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:46:34.134 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:34.134 INFO:teuthology.orchestra.run.vm08.stdout:Remove 1 Package
2026-03-10T06:46:34.134 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:34.134 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 3.6 M
2026-03-10T06:46:34.134 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:46:34.136 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:46:34.136 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:46:34.146 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:46:34.146 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:46:34.171 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T06:46:34.186 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:46:34.226 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:34.227 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:34.227 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T06:46:34.227 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:34.227 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T06:46:34.227 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M
2026-03-10T06:46:34.227 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:34.227 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T06:46:34.227 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:34.227 INFO:teuthology.orchestra.run.vm01.stdout:Remove 1 Package
2026-03-10T06:46:34.227 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:34.227 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 3.6 M
2026-03-10T06:46:34.227 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T06:46:34.229 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T06:46:34.229 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T06:46:34.238 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T06:46:34.238 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T06:46:34.251 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:46:34.262 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:34.262 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:34.262 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T06:46:34.262 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:34.262 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T06:46:34.262 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M
2026-03-10T06:46:34.262 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:34.262 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T06:46:34.262 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:34.262 INFO:teuthology.orchestra.run.vm09.stdout:Remove 1 Package
2026-03-10T06:46:34.262 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:34.262 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 3.6 M
2026-03-10T06:46:34.262 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T06:46:34.264 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T06:46:34.264 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T06:46:34.264 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T06:46:34.274 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T06:46:34.274 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T06:46:34.279 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:46:34.295 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:46:34.295 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:34.295 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T06:46:34.295 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:34.295 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:34.295 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:34.299 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T06:46:34.313 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:46:34.353 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:46:34.373 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:46:34.404 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:46:34.404 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:34.404 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T06:46:34.404 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:34.404 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:34.404 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:34.419 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:46:34.419 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:34.419 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T06:46:34.419 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:34.419 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:34.419 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:34.485 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-volume
2026-03-10T06:46:34.485 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:34.488 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:34.489 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:34.489 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:34.599 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-volume
2026-03-10T06:46:34.599 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:34.600 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-volume
2026-03-10T06:46:34.600 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:34.602 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:34.603 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:34.603 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:34.604 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:34.604 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:34.604 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:34.677 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:34.677 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:34.677 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repo Size
2026-03-10T06:46:34.677 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:34.677 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T06:46:34.677 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k
2026-03-10T06:46:34.677 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages:
2026-03-10T06:46:34.677 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k
2026-03-10T06:46:34.677 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:34.677 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:46:34.677 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:34.677 INFO:teuthology.orchestra.run.vm08.stdout:Remove 2 Packages
2026-03-10T06:46:34.678 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:34.678 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 610 k
2026-03-10T06:46:34.678 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:46:34.679 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:46:34.680 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:46:34.689 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:46:34.690 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:46:34.715 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T06:46:34.718 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:46:34.731 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:46:34.790 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:34.791 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:34.791 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repo Size
2026-03-10T06:46:34.791 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:34.791 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T06:46:34.791 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k
2026-03-10T06:46:34.791 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages:
2026-03-10T06:46:34.791 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k
2026-03-10T06:46:34.791 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:34.791 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T06:46:34.791 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:34.791 INFO:teuthology.orchestra.run.vm09.stdout:Remove 2 Packages
2026-03-10T06:46:34.791 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:34.791 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 610 k
2026-03-10T06:46:34.791 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T06:46:34.791 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:34.792 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:34.792 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repo Size
2026-03-10T06:46:34.792 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:34.792 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T06:46:34.792 INFO:teuthology.orchestra.run.vm01.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k
2026-03-10T06:46:34.792 INFO:teuthology.orchestra.run.vm01.stdout:Removing dependent packages:
2026-03-10T06:46:34.792 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k
2026-03-10T06:46:34.792 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:34.792 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T06:46:34.792 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:34.792 INFO:teuthology.orchestra.run.vm01.stdout:Remove 2 Packages
2026-03-10T06:46:34.792 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:34.792 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 610 k
2026-03-10T06:46:34.792 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T06:46:34.793 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T06:46:34.793 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T06:46:34.794 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T06:46:34.794 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T06:46:34.802 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:46:34.803 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:46:34.804 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T06:46:34.804 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T06:46:34.804 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T06:46:34.804 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T06:46:34.829 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T06:46:34.829 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T06:46:34.832 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:46:34.832 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:46:34.845 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:46:34.845 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:46:34.846 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:46:34.846 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:34.847 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T06:46:34.847 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:34.847 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:34.847 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:34.847 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:34.911 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:46:34.911 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:46:34.916 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:46:34.916 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:46:34.956 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:46:34.956 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:34.956 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T06:46:34.957 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:34.957 INFO:teuthology.orchestra.run.vm01.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:34.957 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:34.957 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:34.966 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:46:34.966 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:34.966 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T06:46:34.966 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:34.966 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:34.966 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:34.966 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:35.046 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:35.046 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:35.046 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repo Size
2026-03-10T06:46:35.046 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:35.046 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T06:46:35.046 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T06:46:35.046 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages:
2026-03-10T06:46:35.047 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T06:46:35.047 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T06:46:35.047 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T06:46:35.047 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:35.047 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:46:35.047 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:35.047 INFO:teuthology.orchestra.run.vm08.stdout:Remove 3 Packages
2026-03-10T06:46:35.047 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:35.047 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 3.7 M
2026-03-10T06:46:35.047 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:46:35.048 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:46:35.048 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:46:35.064 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:46:35.065 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:46:35.095 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T06:46:35.097 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T06:46:35.099 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T06:46:35.099 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:46:35.147 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:35.147 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:35.147 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repo Size
2026-03-10T06:46:35.147 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:35.147 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T06:46:35.147 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T06:46:35.147 INFO:teuthology.orchestra.run.vm01.stdout:Removing dependent packages:
2026-03-10T06:46:35.147 INFO:teuthology.orchestra.run.vm01.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T06:46:35.147 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies:
2026-03-10T06:46:35.148 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T06:46:35.148 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:35.148 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T06:46:35.148 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:35.148 INFO:teuthology.orchestra.run.vm01.stdout:Remove 3 Packages
2026-03-10T06:46:35.148 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:35.148 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 3.7 M
2026-03-10T06:46:35.148 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T06:46:35.150 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T06:46:35.150 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T06:46:35.159 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repo Size
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages:
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout:Remove 3 Packages
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 3.7 M
2026-03-10T06:46:35.160 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T06:46:35.162 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T06:46:35.162 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T06:46:35.163 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:46:35.164 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T06:46:35.164 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T06:46:35.166 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T06:46:35.166 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T06:46:35.179 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T06:46:35.179 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T06:46:35.199 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T06:46:35.201 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T06:46:35.202 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T06:46:35.202 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:46:35.205 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:46:35.205 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:35.205 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T06:46:35.205 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.205 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.205 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.205 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:35.205 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:35.211 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T06:46:35.213 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T06:46:35.214 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T06:46:35.214 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:46:35.268 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:46:35.268 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T06:46:35.268 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T06:46:35.279 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:46:35.280 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T06:46:35.280 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T06:46:35.306 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:46:35.306 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:35.306 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T06:46:35.306 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.306 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.306 INFO:teuthology.orchestra.run.vm01.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.306 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:35.306 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:35.318 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:46:35.318 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:35.318 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T06:46:35.318 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.318 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.318 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.318 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:35.318 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:35.386 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: libcephfs-devel
2026-03-10T06:46:35.386 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:35.389 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:35.390 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:35.390 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:35.480 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: libcephfs-devel
2026-03-10T06:46:35.480 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:35.483 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:35.484 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:35.484 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:35.498 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: libcephfs-devel
2026-03-10T06:46:35.498 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:35.501 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:35.502 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:35.502 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:35.586 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:35.587 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:35.587 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T06:46:35.587 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:35.587 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T06:46:35.587 INFO:teuthology.orchestra.run.vm08.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-10T06:46:35.587 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages:
2026-03-10T06:46:35.587 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T06:46:35.587 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T06:46:35.587 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-10T06:46:35.587 INFO:teuthology.orchestra.run.vm08.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T06:46:35.587 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout:Remove 20 Packages
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 79 M
2026-03-10T06:46:35.588 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:46:35.593 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:46:35.593 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:46:35.615 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:46:35.615 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:46:35.697 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout:Removing dependent packages:
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies:
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout:Remove 20 Packages
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 79 M
2026-03-10T06:46:35.699 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T06:46:35.704 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T06:46:35.704 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T06:46:35.707 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages:
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T06:46:35.709 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T06:46:35.710 INFO:teuthology.orchestra.run.vm09.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-10T06:46:35.710 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T06:46:35.710 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T06:46:35.710 INFO:teuthology.orchestra.run.vm09.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T06:46:35.710 INFO:teuthology.orchestra.run.vm09.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T06:46:35.710 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:35.710 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T06:46:35.710 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T06:46:35.710 INFO:teuthology.orchestra.run.vm09.stdout:Remove 20 Packages
2026-03-10T06:46:35.710 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:35.710 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 79 M
2026-03-10T06:46:35.710 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T06:46:35.710 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T06:46:35.713 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-10T06:46:35.714 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T06:46:35.714 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T06:46:35.715 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-10T06:46:35.719 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-10T06:46:35.719 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T06:46:35.727 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T06:46:35.727 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T06:46:35.733 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T06:46:35.735 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-10T06:46:35.736 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T06:46:35.737 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T06:46:35.737 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-10T06:46:35.739 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T06:46:35.740 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-10T06:46:35.743 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-10T06:46:35.743 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:46:35.757 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:46:35.757 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T06:46:35.757 INFO:teuthology.orchestra.run.vm08.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T06:46:35.757 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:35.769 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T06:46:35.771 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T06:46:35.772 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-10T06:46:35.773 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-10T06:46:35.774 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-10T06:46:35.777 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-10T06:46:35.778 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T06:46:35.778 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-10T06:46:35.778 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T06:46:35.780 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-10T06:46:35.781 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-10T06:46:35.782 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-10T06:46:35.784 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-10T06:46:35.786 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-10T06:46:35.786 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T06:46:35.786 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-10T06:46:35.788 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-10T06:46:35.790 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-10T06:46:35.791 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T06:46:35.792 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-10T06:46:35.793 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-10T06:46:35.795 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-10T06:46:35.797 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T06:46:35.799 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-10T06:46:35.800 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T06:46:35.801 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-10T06:46:35.801 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:46:35.802 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-10T06:46:35.804 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-10T06:46:35.805 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T06:46:35.806 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T06:46:35.807 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-10T06:46:35.809 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-10T06:46:35.809 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:46:35.815 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:46:35.816 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T06:46:35.816 INFO:teuthology.orchestra.run.vm01.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T06:46:35.816 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:35.823 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:46:35.824 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T06:46:35.824 INFO:teuthology.orchestra.run.vm09.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T06:46:35.824 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:35.829 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T06:46:35.832 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-10T06:46:35.835 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-10T06:46:35.839 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T06:46:35.839 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-10T06:46:35.841 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-10T06:46:35.842 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-10T06:46:35.844 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-10T06:46:35.845 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-10T06:46:35.847 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-10T06:46:35.848 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-10T06:46:35.850 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-10T06:46:35.850 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-10T06:46:35.852 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-10T06:46:35.853 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-10T06:46:35.855 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-10T06:46:35.857 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-10T06:46:35.859 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-10T06:46:35.866 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T06:46:35.868 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T06:46:35.868 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-10T06:46:35.868 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-10T06:46:35.868 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-10T06:46:35.868 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-10T06:46:35.868 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-10T06:46:35.869 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-10T06:46:35.869 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T06:46:35.869 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-10T06:46:35.869 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-10T06:46:35.869 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:46:35.869 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-10T06:46:35.869 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-10T06:46:35.869 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-10T06:46:35.869 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-10T06:46:35.869 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-10T06:46:35.869 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-10T06:46:35.869 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-10T06:46:35.869 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-10T06:46:35.869 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-10T06:46:35.873 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:46:35.914 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:35.932 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T06:46:35.932 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-10T06:46:35.932 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-10T06:46:35.932 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-10T06:46:35.932 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-10T06:46:35.932 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-10T06:46:35.933 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-10T06:46:35.933 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T06:46:35.933 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-10T06:46:35.933 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-10T06:46:35.933 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:46:35.933 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-10T06:46:35.933 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-10T06:46:35.933 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-10T06:46:35.933 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-10T06:46:35.933 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-10T06:46:35.933 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-10T06:46:35.933 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-10T06:46:35.933 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-10T06:46:35.933 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-10T06:46:35.941 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T06:46:35.941 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-10T06:46:35.941 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-10T06:46:35.941 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-10T06:46:35.942 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T06:46:35.980 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:46:35.995 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:36.153 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: librbd1
2026-03-10T06:46:36.153 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:36.155 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:36.156 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:36.156 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:36.202 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: librbd1
2026-03-10T06:46:36.202 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:36.203 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: librbd1
2026-03-10T06:46:36.203 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:36.205 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:36.205 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:36.206 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:36.206 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:36.206 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:36.206 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:36.348 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-rados
2026-03-10T06:46:36.348 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:36.350 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:36.351 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:36.351 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:36.386 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rados
2026-03-10T06:46:36.386 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:36.388 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: python3-rados
2026-03-10T06:46:36.388 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:36.388 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:36.389 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:36.389 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:36.390 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:36.391 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:36.391 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:36.536 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-rgw
2026-03-10T06:46:36.536 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:36.539 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:36.539 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:36.539 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:36.568 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: python3-rgw
2026-03-10T06:46:36.568 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rgw
2026-03-10T06:46:36.568 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:36.568 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:36.570 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:36.570 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:36.571 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:36.571 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:36.571 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:36.571 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:36.719 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-cephfs
2026-03-10T06:46:36.719 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:36.721 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:36.721 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:36.721 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:36.739 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-cephfs
2026-03-10T06:46:36.739 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:36.741 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:36.741 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: python3-cephfs
2026-03-10T06:46:36.741 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:36.742 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:36.742 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:36.743 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:36.744 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:36.744 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:36.897 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-rbd
2026-03-10T06:46:36.897 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:36.900 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:36.901 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:36.901 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:36.918 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: python3-rbd
2026-03-10T06:46:36.918 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:36.921 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:36.921 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:36.921 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:36.922 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rbd
2026-03-10T06:46:36.922 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:36.925 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:36.925 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:36.926 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:37.073 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: rbd-fuse
2026-03-10T06:46:37.073 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:37.075 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:37.076 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:37.076 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:37.083 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: rbd-fuse
2026-03-10T06:46:37.083 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:37.085 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:37.086 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:37.086 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:37.091 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-fuse
2026-03-10T06:46:37.091 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:37.093 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:37.093 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:37.093 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:37.245 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: rbd-mirror
2026-03-10T06:46:37.245 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:37.245 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: rbd-mirror
2026-03-10T06:46:37.246 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:37.247 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:37.248 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:37.248 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:37.249 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:37.249 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:37.249 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:37.262 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-mirror
2026-03-10T06:46:37.263 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:37.265 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:37.265 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:37.265 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:37.421 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: rbd-nbd
2026-03-10T06:46:37.421 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:46:37.423 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: rbd-nbd
2026-03-10T06:46:37.423 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:46:37.423 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:46:37.424 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:46:37.424 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:46:37.426 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:46:37.426 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:46:37.426 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:46:37.434 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-nbd
2026-03-10T06:46:37.434 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T06:46:37.436 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T06:46:37.437 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T06:46:37.437 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T06:46:37.452 DEBUG:teuthology.orchestra.run.vm01:> sudo yum clean all
2026-03-10T06:46:37.455 DEBUG:teuthology.orchestra.run.vm08:> sudo yum clean all
2026-03-10T06:46:37.463 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean all
2026-03-10T06:46:37.591 INFO:teuthology.orchestra.run.vm08.stdout:56 files removed
2026-03-10T06:46:37.592 INFO:teuthology.orchestra.run.vm01.stdout:56 files removed
2026-03-10T06:46:37.599 INFO:teuthology.orchestra.run.vm09.stdout:56 files removed
2026-03-10T06:46:37.613 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T06:46:37.619 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T06:46:37.625 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T06:46:37.641 DEBUG:teuthology.orchestra.run.vm01:> sudo yum clean expire-cache
2026-03-10T06:46:37.644 DEBUG:teuthology.orchestra.run.vm08:> sudo yum clean expire-cache
2026-03-10T06:46:37.650 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean expire-cache
2026-03-10T06:46:37.799 INFO:teuthology.orchestra.run.vm01.stdout:Cache was expired
2026-03-10T06:46:37.799 INFO:teuthology.orchestra.run.vm01.stdout:0 files removed
2026-03-10T06:46:37.802 INFO:teuthology.orchestra.run.vm08.stdout:Cache was expired
2026-03-10T06:46:37.802 INFO:teuthology.orchestra.run.vm08.stdout:0 files removed
2026-03-10T06:46:37.807 INFO:teuthology.orchestra.run.vm09.stdout:Cache was expired
2026-03-10T06:46:37.807 INFO:teuthology.orchestra.run.vm09.stdout:0 files removed
2026-03-10T06:46:37.823 DEBUG:teuthology.parallel:result is None
2026-03-10T06:46:37.824 DEBUG:teuthology.parallel:result is None
2026-03-10T06:46:37.827 DEBUG:teuthology.parallel:result is None
2026-03-10T06:46:37.827 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm01.local
2026-03-10T06:46:37.827 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm08.local
2026-03-10T06:46:37.827 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm09.local
2026-03-10T06:46:37.827 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T06:46:37.827 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T06:46:37.828 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T06:46:37.853 DEBUG:teuthology.orchestra.run.vm08:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-10T06:46:37.854 DEBUG:teuthology.orchestra.run.vm01:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-10T06:46:37.856 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-10T06:46:37.919 DEBUG:teuthology.parallel:result is None
2026-03-10T06:46:37.920 DEBUG:teuthology.parallel:result is None
2026-03-10T06:46:37.920 DEBUG:teuthology.parallel:result is None
2026-03-10T06:46:37.920 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-10T06:46:37.923 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-10T06:46:37.923 DEBUG:teuthology.orchestra.run.vm01:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T06:46:37.962 DEBUG:teuthology.orchestra.run.vm08:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T06:46:37.964 DEBUG:teuthology.orchestra.run.vm09:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T06:46:37.976 INFO:teuthology.orchestra.run.vm01.stderr:bash: line 1: ntpq: command not found
2026-03-10T06:46:37.976 INFO:teuthology.orchestra.run.vm08.stderr:bash: line 1: ntpq: command not found
2026-03-10T06:46:37.977 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found
2026-03-10T06:46:37.980 INFO:teuthology.orchestra.run.vm01.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T06:46:37.980 INFO:teuthology.orchestra.run.vm01.stdout:===============================================================================
2026-03-10T06:46:37.980 INFO:teuthology.orchestra.run.vm01.stdout:^- 141.84.43.73 2 6 377 26 -1151us[-1151us] +/- 27ms
2026-03-10T06:46:37.980 INFO:teuthology.orchestra.run.vm01.stdout:^- festnoz.de 2 6 377 24 -119us[ -119us] +/- 49ms
2026-03-10T06:46:37.980 INFO:teuthology.orchestra.run.vm01.stdout:^* static.222.16.42.77.clie> 2 6 377 27 +220us[ +229us] +/- 3003us
2026-03-10T06:46:37.980 INFO:teuthology.orchestra.run.vm01.stdout:^- time4.beanman.net 2 6 377 25 +1602us[+1602us] +/- 19ms
2026-03-10T06:46:37.980 INFO:teuthology.orchestra.run.vm08.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T06:46:37.980 INFO:teuthology.orchestra.run.vm08.stdout:===============================================================================
2026-03-10T06:46:37.981 INFO:teuthology.orchestra.run.vm08.stdout:^- 141.84.43.73 2 6 377 26 -2254us[-2253us] +/- 29ms
2026-03-10T06:46:37.981 INFO:teuthology.orchestra.run.vm08.stdout:^- festnoz.de 2 6 377 24 -127us[ -127us] +/- 49ms
2026-03-10T06:46:37.981 INFO:teuthology.orchestra.run.vm08.stdout:^* static.222.16.42.77.clie> 2 6 377 25 -921ns[ +352ns] +/- 2765us
2026-03-10T06:46:37.981 INFO:teuthology.orchestra.run.vm08.stdout:^- time4.beanman.net 2 6 377 26 +1587us[+1589us] +/- 19ms
2026-03-10T06:46:37.981 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T06:46:37.981 INFO:teuthology.orchestra.run.vm09.stdout:===============================================================================
2026-03-10T06:46:37.981 INFO:teuthology.orchestra.run.vm09.stdout:^- time4.beanman.net 2 6 377 26 +1611us[+1600us] +/- 19ms
2026-03-10T06:46:37.981 INFO:teuthology.orchestra.run.vm09.stdout:^- 141.84.43.73 2 6 377 25 -1064us[-1064us] +/- 26ms
2026-03-10T06:46:37.981 INFO:teuthology.orchestra.run.vm09.stdout:^- festnoz.de 2 6 377 26 -74us[ -84us] +/- 49ms
2026-03-10T06:46:37.981 INFO:teuthology.orchestra.run.vm09.stdout:^* static.222.16.42.77.clie> 2 6 377 26 -15us[ -25us] +/- 2741us
2026-03-10T06:46:37.981 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-10T06:46:37.984 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-10T06:46:37.984 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-10T06:46:37.986 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-10T06:46:37.989 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-10T06:46:37.991 INFO:teuthology.task.internal:Duration was 592.850780 seconds
2026-03-10T06:46:37.991 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-10T06:46:37.993 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
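The clock check above falls back from `ntpq -p` to `chronyc sources`; in `chronyc` output the `^*` marker flags the currently selected source, while `^-` marks rejected candidates. A minimal sketch of reading that output (hypothetical helper, not part of teuthology), assuming the two header lines shown in the log:

```python
# Hypothetical helper (not teuthology code): decide from `chronyc sources`
# output whether a host currently has a selected, synchronized time source.
def has_selected_source(chronyc_output: str) -> bool:
    """True if any source line carries the '^*' (selected) marker."""
    # Skip the column-header line and the '====' separator line.
    for line in chronyc_output.splitlines()[2:]:
        if line.startswith("^*"):
            return True
    return False

sample = """MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^- time4.beanman.net 2 6 377 26 +1611us[+1600us] +/- 19ms
^* static.222.16.42.77.clie> 2 6 377 26 -15us[ -25us] +/- 2741us"""

print(has_selected_source(sample))  # → True
```

All three VMs in the run above show exactly one `^*` line, i.e. each host is synchronized to a source with an estimated error of a few milliseconds.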
2026-03-10T06:46:37.994 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T06:46:38.023 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T06:46:38.025 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T06:46:38.062 INFO:teuthology.orchestra.run.vm01.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T06:46:38.065 INFO:teuthology.orchestra.run.vm08.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T06:46:38.068 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T06:46:38.423 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-10T06:46:38.423 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm01.local
2026-03-10T06:46:38.424 DEBUG:teuthology.orchestra.run.vm01:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T06:46:38.451 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm08.local
2026-03-10T06:46:38.451 DEBUG:teuthology.orchestra.run.vm08:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T06:46:38.486 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm09.local
2026-03-10T06:46:38.486 DEBUG:teuthology.orchestra.run.vm09:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T06:46:38.514 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-10T06:46:38.514 DEBUG:teuthology.orchestra.run.vm01:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T06:46:38.516 DEBUG:teuthology.orchestra.run.vm08:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T06:46:38.529 DEBUG:teuthology.orchestra.run.vm09:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T06:46:38.935 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T06:46:38.936 DEBUG:teuthology.orchestra.run.vm01:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T06:46:38.938 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T06:46:38.940 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T06:46:38.960 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T06:46:38.960 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T06:46:38.960 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T06:46:38.960 INFO:teuthology.orchestra.run.vm01.stderr:
2026-03-10T06:46:38.961 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T06:46:38.962 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T06:46:38.963 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T06:46:38.963 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T06:46:38.963 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T06:46:38.963 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T06:46:38.965 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T06:46:38.965 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T06:46:38.965 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T06:46:38.966 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T06:46:38.966 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T06:46:39.072 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.3% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T06:46:39.078 INFO:teuthology.orchestra.run.vm01.stderr: 98.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T06:46:39.099 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.4% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T06:46:39.101 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T06:46:39.104 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T06:46:39.104 DEBUG:teuthology.orchestra.run.vm01:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T06:46:39.144 DEBUG:teuthology.orchestra.run.vm08:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T06:46:39.168 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T06:46:39.195 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T06:46:39.197 DEBUG:teuthology.orchestra.run.vm01:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:46:39.199 DEBUG:teuthology.orchestra.run.vm08:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:46:39.210 DEBUG:teuthology.orchestra.run.vm09:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:46:39.224 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = core
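The kern.log check in the commands above is a match-then-exclude filter: a `grep -E` for `BUG`/`INFO`/`DEADLOCK` followed by a chain of `grep -v` exclusions for known-benign messages, with `head -n 1` reporting the first surviving line. The same pattern can be sketched in Python (a hypothetical re-implementation for illustration, not teuthology's actual code; only a few of the ignore patterns are reproduced):

```python
# Hypothetical sketch of teuthology's kern.log check: report the first
# line that matches a suspicious keyword but no known-benign pattern.
import re

MATCH = re.compile(r"\bBUG\b|\bINFO\b|\bDEADLOCK\b")
IGNORE = [
    re.compile(p)
    for p in (
        r"task .* blocked for more than .* seconds",
        r"lockdep is turned off",
        r"ceph-create-keys: INFO",
    )
]

def first_suspect(lines):
    """Mirror of `grep -E ... | grep -v ... | head -n 1`."""
    for line in lines:
        if MATCH.search(line) and not any(p.search(line) for p in IGNORE):
            return line
    return None  # empty output: the check passes

log = [
    "kernel: ceph-create-keys: INFO all good",   # matched, but ignored
    "kernel: BUG: soft lockup detected",          # matched, not ignored
]
print(first_suspect(log))  # → kernel: BUG: soft lockup detected
```

In the run above all three pipelines produced no output, so the syslog check passed on every host.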
2026-03-10T06:46:39.238 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = core
2026-03-10T06:46:39.261 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = core
2026-03-10T06:46:39.274 DEBUG:teuthology.orchestra.run.vm01:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:46:39.298 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:46:39.298 DEBUG:teuthology.orchestra.run.vm08:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:46:39.313 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:46:39.313 DEBUG:teuthology.orchestra.run.vm09:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:46:39.328 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:46:39.328 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T06:46:39.331 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T06:46:39.331 DEBUG:teuthology.misc:Transferring archived files from vm01:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/932/remote/vm01
2026-03-10T06:46:39.331 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T06:46:39.367 DEBUG:teuthology.misc:Transferring archived files from vm08:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/932/remote/vm08
2026-03-10T06:46:39.367 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T06:46:39.394 DEBUG:teuthology.misc:Transferring archived files from vm09:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/932/remote/vm09
2026-03-10T06:46:39.394 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T06:46:39.422 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T06:46:39.422 DEBUG:teuthology.orchestra.run.vm01:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T06:46:39.425 DEBUG:teuthology.orchestra.run.vm08:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T06:46:39.437 DEBUG:teuthology.orchestra.run.vm09:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T06:46:39.477 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T06:46:39.480 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T06:46:39.480 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T06:46:39.482 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-10T06:46:39.482 DEBUG:teuthology.orchestra.run.vm01:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T06:46:39.484 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T06:46:39.494 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T06:46:39.499 INFO:teuthology.orchestra.run.vm01.stdout: 8532143 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 06:46 /home/ubuntu/cephtest
2026-03-10T06:46:39.512 INFO:teuthology.orchestra.run.vm08.stdout: 8532145 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 06:46 /home/ubuntu/cephtest
2026-03-10T06:46:39.533 INFO:teuthology.orchestra.run.vm09.stdout: 8532138 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 06:46 /home/ubuntu/cephtest
2026-03-10T06:46:39.534 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T06:46:39.540 INFO:teuthology.run:Summary data:
description: orch/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_set_mon_crush_locations}
duration: 592.8507800102234
flavor: default
owner: kyr
success: true

2026-03-10T06:46:39.540 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T06:46:39.562 INFO:teuthology.run:pass