2026-03-10T13:09:07.057 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T13:09:07.061 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T13:09:07.081 INFO:teuthology.run:Config: archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1043
branch: squid
description: orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}}
email: null
first_in_suite: false
flavor: default
job_id: '1043'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 3
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: true
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - CEPHADM_FAILED_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - osd.0
  - osd.1
  - osd.2
  - mon.a
  - mgr.a
  - client.0
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm07.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNpE1rFqRd00U0FaTv91xh5vRgqwIpwdIdVoD8SxcwW+QWSUASc8zNWizddhiRYe+OxA601VYla7DN69oyeZi0E=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install runc nvmetcli nvme-cli -y
    - sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
    - sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
- install: null
- cephadm: null
- cephadm.shell:
    host.a:
    - ceph osd pool create foo
    - rbd pool init foo
    - ceph orch apply iscsi foo u p
- workunit:
    clients:
      client.0:
      - cephadm/test_iscsi_pids_limit.sh
      - cephadm/test_iscsi_etc_hosts.sh
      - cephadm/test_iscsi_setup.sh
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T13:09:07.081 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
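The cephadm.shell step in the tasks list above drives the iSCSI piece under test: it creates an RBD pool named foo and then asks the orchestrator to deploy an iscsi service against it. In `ceph orch apply iscsi foo u p`, the positional arguments are the pool, the gateway API user, and the gateway API password. A minimal annotated restatement of those commands (same commands as in the job config; comments are editorial):

    ceph osd pool create foo        # backing RADOS pool for the gateway
    rbd pool init foo               # initialize the pool for RBD use
    ceph orch apply iscsi foo u p   # deploy iscsi service: <pool> <api_user> <api_password>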
2026-03-10T13:09:07.081 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T13:09:07.081 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T13:09:07.082 INFO:teuthology.task.internal:Checking packages...
2026-03-10T13:09:07.082 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T13:09:07.082 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T13:09:07.082 INFO:teuthology.packaging:ref: None
2026-03-10T13:09:07.082 INFO:teuthology.packaging:tag: None
2026-03-10T13:09:07.082 INFO:teuthology.packaging:branch: squid
2026-03-10T13:09:07.082 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T13:09:07.083 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-10T13:09:07.819 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-10T13:09:07.820 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T13:09:07.821 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T13:09:07.821 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T13:09:07.821 INFO:teuthology.task.internal:Saving configuration
2026-03-10T13:09:07.825 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T13:09:07.826 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T13:09:07.833 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm07.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1043', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 13:08:24.753953', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:07', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNpE1rFqRd00U0FaTv91xh5vRgqwIpwdIdVoD8SxcwW+QWSUASc8zNWizddhiRYe+OxA601VYla7DN69oyeZi0E='}
2026-03-10T13:09:07.833 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T13:09:07.834 INFO:teuthology.task.internal:roles: ubuntu@vm07.local - ['host.a', 'osd.0', 'osd.1', 'osd.2', 'mon.a', 'mgr.a', 'client.0']
2026-03-10T13:09:07.834 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T13:09:07.841 DEBUG:teuthology.task.console_log:vm07 does not support IPMI; excluding
2026-03-10T13:09:07.841 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7fb281822170>, signals=[15])
2026-03-10T13:09:07.841 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T13:09:07.842 INFO:teuthology.task.internal:Opening connections...
2026-03-10T13:09:07.842 DEBUG:teuthology.task.internal:connecting to ubuntu@vm07.local
2026-03-10T13:09:07.842 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T13:09:07.903 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T13:09:07.905 DEBUG:teuthology.orchestra.run.vm07:> uname -m
2026-03-10T13:09:08.052 INFO:teuthology.orchestra.run.vm07.stdout:x86_64
2026-03-10T13:09:08.052 DEBUG:teuthology.orchestra.run.vm07:> cat /etc/os-release
2026-03-10T13:09:08.107 INFO:teuthology.orchestra.run.vm07.stdout:NAME="CentOS Stream"
2026-03-10T13:09:08.108 INFO:teuthology.orchestra.run.vm07.stdout:VERSION="9"
2026-03-10T13:09:08.108 INFO:teuthology.orchestra.run.vm07.stdout:ID="centos"
2026-03-10T13:09:08.108 INFO:teuthology.orchestra.run.vm07.stdout:ID_LIKE="rhel fedora"
2026-03-10T13:09:08.108 INFO:teuthology.orchestra.run.vm07.stdout:VERSION_ID="9"
2026-03-10T13:09:08.108 INFO:teuthology.orchestra.run.vm07.stdout:PLATFORM_ID="platform:el9"
2026-03-10T13:09:08.108 INFO:teuthology.orchestra.run.vm07.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T13:09:08.108 INFO:teuthology.orchestra.run.vm07.stdout:ANSI_COLOR="0;31"
2026-03-10T13:09:08.108 INFO:teuthology.orchestra.run.vm07.stdout:LOGO="fedora-logo-icon"
2026-03-10T13:09:08.108 INFO:teuthology.orchestra.run.vm07.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T13:09:08.108 INFO:teuthology.orchestra.run.vm07.stdout:HOME_URL="https://centos.org/"
2026-03-10T13:09:08.108 INFO:teuthology.orchestra.run.vm07.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T13:09:08.108 INFO:teuthology.orchestra.run.vm07.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T13:09:08.108 INFO:teuthology.orchestra.run.vm07.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T13:09:08.108 INFO:teuthology.lock.ops:Updating vm07.local on lock server
2026-03-10T13:09:08.113 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T13:09:08.115 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T13:09:08.116 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T13:09:08.116 DEBUG:teuthology.orchestra.run.vm07:> test '!' -e /home/ubuntu/cephtest
2026-03-10T13:09:08.162 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T13:09:08.163 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T13:09:08.163 DEBUG:teuthology.orchestra.run.vm07:> test -z $(ls -A /var/lib/ceph)
2026-03-10T13:09:08.217 INFO:teuthology.orchestra.run.vm07.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T13:09:08.217 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T13:09:08.225 DEBUG:teuthology.orchestra.run.vm07:> test -e /ceph-qa-ready
2026-03-10T13:09:08.271 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:09:08.464 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T13:09:08.465 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T13:09:08.465 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T13:09:08.483 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T13:09:08.485 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T13:09:08.486 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T13:09:08.486 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T13:09:08.544 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T13:09:08.545 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T13:09:08.545 DEBUG:teuthology.orchestra.run.vm07:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T13:09:08.600 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:09:08.601 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T13:09:08.669 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T13:09:08.679 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T13:09:08.680 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T13:09:08.682 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T13:09:08.682 DEBUG:teuthology.orchestra.run.vm07:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T13:09:08.744 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T13:09:08.746 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T13:09:08.746 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T13:09:08.799 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T13:09:08.865 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T13:09:08.922 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T13:09:08.922 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T13:09:08.982 DEBUG:teuthology.orchestra.run.vm07:> sudo service rsyslog restart
2026-03-10T13:09:09.049 INFO:teuthology.orchestra.run.vm07.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T13:09:09.506 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T13:09:09.507 INFO:teuthology.task.internal:Starting timer...
2026-03-10T13:09:09.507 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T13:09:09.545 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T13:09:09.547 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0']}
2026-03-10T13:09:09.547 INFO:teuthology.task.selinux:Excluding vm07: VMs are not yet supported
2026-03-10T13:09:09.547 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T13:09:09.547 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T13:09:09.547 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T13:09:09.547 INFO:teuthology.run_tasks:Running task ansible.cephlab...
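The internal.coredump task above redirects kernel.core_pattern into the job's archive directory, so any crash during the run leaves a core file (%t is the epoch timestamp, %p the PID) that teuthology can collect at teardown. A hypothetical spot-check from a shell on vm07, not part of this run:

    sysctl kernel.core_pattern                    # expect the cephtest coredump path
    ulimit -c unlimited                           # allow core files in this shell
    sleep 100 & kill -SEGV $!                     # crash a throwaway process
    ls /home/ubuntu/cephtest/archive/coredump/    # expect a <epoch>.<pid>.core file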
2026-03-10T13:09:09.548 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T13:09:09.549 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T13:09:09.550 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T13:09:10.206 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T13:09:10.211 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T13:09:10.212 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventorykcmau9tu --limit vm07.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T13:10:57.233 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm07.local')]
2026-03-10T13:10:57.233 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm07.local'
2026-03-10T13:10:57.234 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T13:10:57.301 DEBUG:teuthology.orchestra.run.vm07:> true
2026-03-10T13:10:57.372 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm07.local'
2026-03-10T13:10:57.373 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T13:10:57.375 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T13:10:57.375 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T13:10:57.375 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T13:10:57.446 INFO:teuthology.orchestra.run.vm07.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T13:10:57.462 INFO:teuthology.orchestra.run.vm07.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T13:10:57.497 INFO:teuthology.orchestra.run.vm07.stderr:sudo: ntpd: command not found
2026-03-10T13:10:57.508 INFO:teuthology.orchestra.run.vm07.stdout:506 Cannot talk to daemon
2026-03-10T13:10:57.523 INFO:teuthology.orchestra.run.vm07.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T13:10:57.536 INFO:teuthology.orchestra.run.vm07.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T13:10:57.578 INFO:teuthology.orchestra.run.vm07.stderr:bash: line 1: ntpq: command not found
2026-03-10T13:10:57.580 INFO:teuthology.orchestra.run.vm07.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T13:10:57.580 INFO:teuthology.orchestra.run.vm07.stdout:===============================================================================
2026-03-10T13:10:57.580 INFO:teuthology.run_tasks:Running task pexec...
2026-03-10T13:10:57.583 INFO:teuthology.task.pexec:Executing custom commands...
2026-03-10T13:10:57.583 DEBUG:teuthology.orchestra.run.vm07:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T13:10:57.623 DEBUG:teuthology.task.pexec:ubuntu@vm07.local< sudo dnf remove nvme-cli -y
2026-03-10T13:10:57.623 DEBUG:teuthology.task.pexec:ubuntu@vm07.local< sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T13:10:57.623 DEBUG:teuthology.task.pexec:ubuntu@vm07.local< sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T13:10:57.623 DEBUG:teuthology.task.pexec:ubuntu@vm07.local< sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T13:10:57.623 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm07.local
2026-03-10T13:10:57.623 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T13:10:57.624 INFO:teuthology.task.pexec:sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T13:10:57.624 INFO:teuthology.task.pexec:sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T13:10:57.624 INFO:teuthology.task.pexec:sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T13:10:57.847 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: nvme-cli
2026-03-10T13:10:57.847 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:10:57.850 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:10:57.850 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:10:57.851 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:10:58.327 INFO:teuthology.orchestra.run.vm07.stdout:Last metadata expiration check: 0:01:04 ago on Tue 10 Mar 2026 01:09:54 PM UTC.
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout:Installing:
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout: runc x86_64 4:1.4.0-2.el9 appstream 4.0 M
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout:Installing dependencies:
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout:Install 7 Packages
2026-03-10T13:10:58.450 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:10:58.451 INFO:teuthology.orchestra.run.vm07.stdout:Total download size: 6.3 M
2026-03-10T13:10:58.451 INFO:teuthology.orchestra.run.vm07.stdout:Installed size: 24 M
2026-03-10T13:10:58.451 INFO:teuthology.orchestra.run.vm07.stdout:Downloading Packages:
2026-03-10T13:10:58.943 INFO:teuthology.orchestra.run.vm07.stdout:(1/7): python3-configshell-1.1.30-1.el9.noarch. 252 kB/s | 72 kB 00:00
2026-03-10T13:10:58.943 INFO:teuthology.orchestra.run.vm07.stdout:(2/7): nvmetcli-0.8-3.el9.noarch.rpm 153 kB/s | 44 kB 00:00
2026-03-10T13:10:59.044 INFO:teuthology.orchestra.run.vm07.stdout:(3/7): python3-kmod-0.9-32.el9.x86_64.rpm 828 kB/s | 84 kB 00:00
2026-03-10T13:10:59.046 INFO:teuthology.orchestra.run.vm07.stdout:(4/7): python3-pyparsing-2.4.7-9.el9.noarch.rpm 1.4 MB/s | 150 kB 00:00
2026-03-10T13:10:59.126 INFO:teuthology.orchestra.run.vm07.stdout:(5/7): nvme-cli-2.16-1.el9.x86_64.rpm 2.5 MB/s | 1.2 MB 00:00
2026-03-10T13:10:59.201 INFO:teuthology.orchestra.run.vm07.stdout:(6/7): python3-urwid-2.1.2-4.el9.x86_64.rpm 5.3 MB/s | 837 kB 00:00
2026-03-10T13:10:59.211 INFO:teuthology.orchestra.run.vm07.stdout:(7/7): runc-1.4.0-2.el9.x86_64.rpm 24 MB/s | 4.0 MB 00:00
2026-03-10T13:10:59.211 INFO:teuthology.orchestra.run.vm07.stdout:--------------------------------------------------------------------------------
2026-03-10T13:10:59.211 INFO:teuthology.orchestra.run.vm07.stdout:Total 8.3 MB/s | 6.3 MB 00:00
2026-03-10T13:10:59.305 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-03-10T13:10:59.319 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-03-10T13:10:59.319 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-03-10T13:10:59.394 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-03-10T13:10:59.394 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-03-10T13:10:59.598 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-03-10T13:10:59.614 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/7
2026-03-10T13:10:59.627 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/7
2026-03-10T13:10:59.637 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/7
2026-03-10T13:10:59.645 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/7
2026-03-10T13:10:59.646 INFO:teuthology.orchestra.run.vm07.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/7
2026-03-10T13:10:59.709 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/7
2026-03-10T13:10:59.877 INFO:teuthology.orchestra.run.vm07.stdout: Installing : runc-4:1.4.0-2.el9.x86_64 6/7
2026-03-10T13:10:59.882 INFO:teuthology.orchestra.run.vm07.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 7/7
2026-03-10T13:11:00.271 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 7/7
2026-03-10T13:11:00.271 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T13:11:00.271 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:11:00.872 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/7
2026-03-10T13:11:00.872 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/7
2026-03-10T13:11:00.872 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/7
2026-03-10T13:11:00.872 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/7
2026-03-10T13:11:00.873 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/7
2026-03-10T13:11:00.873 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/7
2026-03-10T13:11:00.983 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : runc-4:1.4.0-2.el9.x86_64 7/7
2026-03-10T13:11:00.983 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:11:00.983 INFO:teuthology.orchestra.run.vm07.stdout:Installed:
2026-03-10T13:11:00.983 INFO:teuthology.orchestra.run.vm07.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-10T13:11:00.983 INFO:teuthology.orchestra.run.vm07.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-10T13:11:00.983 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-10T13:11:00.983 INFO:teuthology.orchestra.run.vm07.stdout: runc-4:1.4.0-2.el9.x86_64
2026-03-10T13:11:00.983 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:11:00.983 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:11:01.104 DEBUG:teuthology.parallel:result is None
2026-03-10T13:11:01.104 INFO:teuthology.run_tasks:Running task install...
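The pexec step above implements the 0-distro/centos_9.stream_runc facet of this job: the two sed edits flip the commented runtime line in /usr/share/containers/containers.conf so podman launches containers with runc instead of the crun default. A hypothetical way to confirm the switch took effect, not part of this run:

    grep '^runtime' /usr/share/containers/containers.conf   # expect: runtime = "runc"
    podman info --format '{{.Host.OCIRuntime.Name}}'        # expect: runc (assuming podman is installed)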
2026-03-10T13:11:01.106 DEBUG:teuthology.task.install:project ceph
2026-03-10T13:11:01.106 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T13:11:01.106 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T13:11:01.106 INFO:teuthology.task.install:Using flavor: default
2026-03-10T13:11:01.108 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-10T13:11:01.108 INFO:teuthology.task.install:extra packages: []
2026-03-10T13:11:01.108 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-10T13:11:01.109 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T13:11:01.705 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-10T13:11:01.705 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-10T13:11:02.266 INFO:teuthology.packaging:Writing yum repo: [ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-10T13:11:02.266 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T13:11:02.266 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-10T13:11:02.298 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-10T13:11:02.298 DEBUG:teuthology.orchestra.run.vm07:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-10T13:11:02.371 DEBUG:teuthology.orchestra.run.vm07:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-10T13:11:02.452 DEBUG:teuthology.orchestra.run.vm07:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-10T13:11:02.518 INFO:teuthology.orchestra.run.vm07.stdout:check_obsoletes = 1
2026-03-10T13:11:02.519 DEBUG:teuthology.orchestra.run.vm07:> sudo yum clean all
2026-03-10T13:11:02.729 INFO:teuthology.orchestra.run.vm07.stdout:41 files removed
2026-03-10T13:11:02.762 DEBUG:teuthology.orchestra.run.vm07:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-10T13:11:04.180 INFO:teuthology.orchestra.run.vm07.stdout:ceph packages for x86_64 69 kB/s | 84 kB 00:01
2026-03-10T13:11:05.164 INFO:teuthology.orchestra.run.vm07.stdout:ceph noarch packages 12 kB/s | 12 kB 00:00
2026-03-10T13:11:06.130 INFO:teuthology.orchestra.run.vm07.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00
2026-03-10T13:11:07.138 INFO:teuthology.orchestra.run.vm07.stdout:CentOS Stream 9 - BaseOS 9.1 MB/s | 8.9 MB 00:00
2026-03-10T13:11:08.774 INFO:teuthology.orchestra.run.vm07.stdout:CentOS Stream 9 - AppStream 32 MB/s | 27 MB 00:00
2026-03-10T13:11:12.681 INFO:teuthology.orchestra.run.vm07.stdout:CentOS Stream 9 - CRB 14 MB/s | 8.0 MB 00:00
2026-03-10T13:11:14.386 INFO:teuthology.orchestra.run.vm07.stdout:CentOS Stream 9 - Extras packages 29 kB/s | 20 kB 00:00
2026-03-10T13:11:15.298 INFO:teuthology.orchestra.run.vm07.stdout:Extra Packages for Enterprise Linux 25 MB/s | 20 MB 00:00
2026-03-10T13:11:20.189 INFO:teuthology.orchestra.run.vm07.stdout:lab-extras 64 kB/s | 50 kB 00:00
2026-03-10T13:11:21.519 INFO:teuthology.orchestra.run.vm07.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T13:11:21.520 INFO:teuthology.orchestra.run.vm07.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T13:11:21.523 INFO:teuthology.orchestra.run.vm07.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed.
2026-03-10T13:11:21.524 INFO:teuthology.orchestra.run.vm07.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-10T13:11:21.551 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout:======================================================================================
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout:======================================================================================
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout:Installing:
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout:Upgrading:
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout:Installing dependencies:
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-10T13:11:21.555 INFO:teuthology.orchestra.run.vm07.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-10T13:11:21.556 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: unzip x86_64 6.0-59.el9 baseos 182 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: zip x86_64 3.0-35.el9 baseos 266 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout:Installing weak dependencies:
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout:======================================================================================
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout:Install 134 Packages
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout:Upgrade 2 Packages
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout:Total download size: 210 M
2026-03-10T13:11:21.557 INFO:teuthology.orchestra.run.vm07.stdout:Downloading Packages:
2026-03-10T13:11:22.946 INFO:teuthology.orchestra.run.vm07.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 13 kB/s | 6.5 kB 00:00
2026-03-10T13:11:23.924 INFO:teuthology.orchestra.run.vm07.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.2 MB/s | 1.2 MB 00:00
2026-03-10T13:11:24.048 INFO:teuthology.orchestra.run.vm07.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 1.1 MB/s | 145 kB 00:00
2026-03-10T13:11:24.912 INFO:teuthology.orchestra.run.vm07.stdout:(4/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 2.2 MB/s | 5.5 MB 00:02
2026-03-10T13:11:24.921 INFO:teuthology.orchestra.run.vm07.stdout:(5/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 2.8 MB/s | 2.4 MB 00:00
2026-03-10T13:11:25.154 INFO:teuthology.orchestra.run.vm07.stdout:(6/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 4.5 MB/s | 1.1 MB 00:00
2026-03-10T13:11:26.034 INFO:teuthology.orchestra.run.vm07.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 4.3 MB/s | 4.7 MB 00:01
2026-03-10T13:11:27.169 INFO:teuthology.orchestra.run.vm07.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 4.6 MB/s | 22 MB 00:04
2026-03-10T13:11:27.345 INFO:teuthology.orchestra.run.vm07.stdout:(9/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86_ 7.8 MB/s | 17 MB 00:02
2026-03-10T13:11:27.346 INFO:teuthology.orchestra.run.vm07.stdout:(10/136): ceph-selinux-19.2.3-678.ge911bdeb.el9 141 kB/s | 25 kB 00:00
2026-03-10T13:11:27.493 INFO:teuthology.orchestra.run.vm07.stdout:(11/136): libcephfs-devel-19.2.3-678.ge911bdeb. 230 kB/s | 34 kB 00:00
2026-03-10T13:11:27.621 INFO:teuthology.orchestra.run.vm07.stdout:(12/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9 6.8 MB/s | 11 MB 00:01
2026-03-10T13:11:27.627 INFO:teuthology.orchestra.run.vm07.stdout:(13/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 7.3 MB/s | 1.0 MB 00:00
2026-03-10T13:11:27.757 INFO:teuthology.orchestra.run.vm07.stdout:(14/136): librados-devel-19.2.3-678.ge911bdeb.e 975 kB/s | 127 kB 00:00
2026-03-10T13:11:27.759 INFO:teuthology.orchestra.run.vm07.stdout:(15/136): libcephsqlite-19.2.3-678.ge911bdeb.el 1.2 MB/s | 163 kB 00:00
2026-03-10T13:11:27.887 INFO:teuthology.orchestra.run.vm07.stdout:(16/136): libradosstriper1-19.2.3-678.ge911bdeb 3.8 MB/s | 503 kB 00:00
2026-03-10T13:11:28.013 INFO:teuthology.orchestra.run.vm07.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 358 kB/s | 45 kB 00:00
2026-03-10T13:11:28.139 INFO:teuthology.orchestra.run.vm07.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 1.1 MB/s | 142 kB 00:00
2026-03-10T13:11:28.268 INFO:teuthology.orchestra.run.vm07.stdout:(19/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 11 MB/s | 5.4 MB 00:00
2026-03-10T13:11:28.271 INFO:teuthology.orchestra.run.vm07.stdout:(20/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.2 MB/s | 165 kB 00:00
2026-03-10T13:11:28.392 INFO:teuthology.orchestra.run.vm07.stdout:(21/136): python3-rados-19.2.3-678.ge911bdeb.el 2.6 MB/s | 323 kB 00:00
2026-03-10T13:11:28.397 INFO:teuthology.orchestra.run.vm07.stdout:(22/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.4 MB/s | 303 kB 00:00
2026-03-10T13:11:28.514 INFO:teuthology.orchestra.run.vm07.stdout:(23/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 822 kB/s | 100 kB 00:00
2026-03-10T13:11:28.520 INFO:teuthology.orchestra.run.vm07.stdout:(24/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 689 kB/s | 85 kB 00:00
2026-03-10T13:11:28.661 INFO:teuthology.orchestra.run.vm07.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.2 MB/s | 171 kB 00:00
2026-03-10T13:11:28.780 INFO:teuthology.orchestra.run.vm07.stdout:(26/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 12 MB/s | 3.1 MB 00:00
2026-03-10T13:11:28.784 INFO:teuthology.orchestra.run.vm07.stdout:(27/136): ceph-grafana-dashboards-19.2.3-678.ge 253 kB/s | 31 kB 00:00
2026-03-10T13:11:28.902 INFO:teuthology.orchestra.run.vm07.stdout:(28/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00
2026-03-10T13:11:29.216 INFO:teuthology.orchestra.run.vm07.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 8.8 MB/s | 3.8 MB 00:00
2026-03-10T13:11:29.342 INFO:teuthology.orchestra.run.vm07.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 2.0 MB/s | 253 kB 00:00
2026-03-10T13:11:29.424 INFO:teuthology.orchestra.run.vm07.stdout:(31/136): ceph-mgr-diskprediction-local-19.2.3- 14 MB/s | 7.4 MB 00:00
2026-03-10T13:11:29.466 INFO:teuthology.orchestra.run.vm07.stdout:(32/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 400 kB/s | 49 kB 00:00
2026-03-10T13:11:29.547 INFO:teuthology.orchestra.run.vm07.stdout:(33/136): ceph-prometheus-alerts-19.2.3-678.ge9 137 kB/s | 17 kB 00:00
2026-03-10T13:11:29.594 INFO:teuthology.orchestra.run.vm07.stdout:(34/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.3 MB/s | 299 kB 00:00
2026-03-10T13:11:29.679 INFO:teuthology.orchestra.run.vm07.stdout:(35/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 5.7 MB/s | 769 kB 00:00
2026-03-10T13:11:29.763 INFO:teuthology.orchestra.run.vm07.stdout:(36/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 2.0 MB/s | 351 kB 00:00
2026-03-10T13:11:29.763 INFO:teuthology.orchestra.run.vm07.stdout:(37/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 478 kB/s | 40 kB 00:00
2026-03-10T13:11:29.797 INFO:teuthology.orchestra.run.vm07.stdout:(38/136): libconfig-1.7.2-9.el9.x86_64.rpm 2.1 MB/s | 72 kB 00:00
2026-03-10T13:11:29.828 INFO:teuthology.orchestra.run.vm07.stdout:(39/136): libquadmath-11.5.0-14.el9.x86_64.rpm 5.8 MB/s | 184 kB 00:00
2026-03-10T13:11:29.857 INFO:teuthology.orchestra.run.vm07.stdout:(40/136): mailcap-2.1.49-5.el9.noarch.rpm 1.1 MB/s | 33 kB 00:00
2026-03-10T13:11:29.887 INFO:teuthology.orchestra.run.vm07.stdout:(41/136): pciutils-3.7.0-7.el9.x86_64.rpm 3.0 MB/s | 93 kB 00:00
2026-03-10T13:11:29.910 INFO:teuthology.orchestra.run.vm07.stdout:(42/136): libgfortran-11.5.0-14.el9.x86_64.rpm 5.3 MB/s | 794 kB 00:00
2026-03-10T13:11:29.920 INFO:teuthology.orchestra.run.vm07.stdout:(43/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 7.6 MB/s | 253 kB 00:00
2026-03-10T13:11:29.955 INFO:teuthology.orchestra.run.vm07.stdout:(44/136): python3-ply-3.11-14.el9.noarch.rpm 3.0 MB/s | 106 kB 00:00
2026-03-10T13:11:29.976 INFO:teuthology.orchestra.run.vm07.stdout:(45/136): python3-cryptography-36.0.1-5.el9.x86 19 MB/s | 1.2 MB 00:00
2026-03-10T13:11:29.986 INFO:teuthology.orchestra.run.vm07.stdout:(46/136): python3-pycparser-2.20-6.el9.noarch.r 4.3 MB/s | 135 kB 00:00
2026-03-10T13:11:30.007 INFO:teuthology.orchestra.run.vm07.stdout:(47/136): python3-requests-2.25.1-10.el9.noarch 4.1 MB/s | 126 kB 00:00
2026-03-10T13:11:30.018 INFO:teuthology.orchestra.run.vm07.stdout:(48/136): python3-urllib3-1.26.5-7.el9.noarch.r 6.7 MB/s | 218 kB 00:00
00:00 2026-03-10T13:11:30.079 INFO:teuthology.orchestra.run.vm07.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 4.2 MB/s | 266 kB 00:00 2026-03-10T13:11:30.249 INFO:teuthology.orchestra.run.vm07.stdout:(51/136): flexiblas-3.0.4-9.el9.x86_64.rpm 179 kB/s | 30 kB 00:00 2026-03-10T13:11:30.321 INFO:teuthology.orchestra.run.vm07.stdout:(52/136): boost-program-options-1.75.0-13.el9.x 380 kB/s | 104 kB 00:00 2026-03-10T13:11:30.694 INFO:teuthology.orchestra.run.vm07.stdout:(53/136): ceph-test-19.2.3-678.ge911bdeb.el9.x8 15 MB/s | 50 MB 00:03 2026-03-10T13:11:30.829 INFO:teuthology.orchestra.run.vm07.stdout:(54/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 5.2 MB/s | 3.0 MB 00:00 2026-03-10T13:11:30.835 INFO:teuthology.orchestra.run.vm07.stdout:(55/136): flexiblas-openblas-openmp-3.0.4-9.el9 29 kB/s | 15 kB 00:00 2026-03-10T13:11:31.538 INFO:teuthology.orchestra.run.vm07.stdout:(56/136): libnbd-1.20.3-4.el9.x86_64.rpm 194 kB/s | 164 kB 00:00 2026-03-10T13:11:31.565 INFO:teuthology.orchestra.run.vm07.stdout:(57/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 218 kB/s | 160 kB 00:00 2026-03-10T13:11:31.590 INFO:teuthology.orchestra.run.vm07.stdout:(58/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 60 kB/s | 45 kB 00:00 2026-03-10T13:11:31.831 INFO:teuthology.orchestra.run.vm07.stdout:(59/136): librdkafka-1.6.1-102.el9.x86_64.rpm 2.2 MB/s | 662 kB 00:00 2026-03-10T13:11:32.019 INFO:teuthology.orchestra.run.vm07.stdout:(60/136): libxslt-1.1.34-12.el9.x86_64.rpm 544 kB/s | 233 kB 00:00 2026-03-10T13:11:32.284 INFO:teuthology.orchestra.run.vm07.stdout:(61/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 342 kB/s | 246 kB 00:00 2026-03-10T13:11:32.461 INFO:teuthology.orchestra.run.vm07.stdout:(62/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 464 kB/s | 292 kB 00:00 2026-03-10T13:11:32.686 INFO:teuthology.orchestra.run.vm07.stdout:(63/136): lua-5.4.4-4.el9.x86_64.rpm 282 kB/s | 188 kB 00:00 2026-03-10T13:11:32.829 INFO:teuthology.orchestra.run.vm07.stdout:(64/136): openblas-0.3.29-1.el9.x86_64.rpm 77 kB/s | 42 kB 00:00 2026-03-10T13:11:33.590 INFO:teuthology.orchestra.run.vm07.stdout:(65/136): openblas-openmp-0.3.29-1.el9.x86_64.r 4.7 MB/s | 5.3 MB 00:01 2026-03-10T13:11:34.013 INFO:teuthology.orchestra.run.vm07.stdout:(66/136): python3-devel-3.9.25-3.el9.x86_64.rpm 579 kB/s | 244 kB 00:00 2026-03-10T13:11:34.160 INFO:teuthology.orchestra.run.vm07.stdout:(67/136): python3-jinja2-2.11.3-8.el9.noarch.rp 1.6 MB/s | 249 kB 00:00 2026-03-10T13:11:34.216 INFO:teuthology.orchestra.run.vm07.stdout:(68/136): python3-jmespath-1.0.1-1.el9.noarch.r 856 kB/s | 48 kB 00:00 2026-03-10T13:11:34.328 INFO:teuthology.orchestra.run.vm07.stdout:(69/136): protobuf-3.14.0-17.el9.x86_64.rpm 627 kB/s | 1.0 MB 00:01 2026-03-10T13:11:34.619 INFO:teuthology.orchestra.run.vm07.stdout:(70/136): python3-libstoragemgmt-1.10.1-1.el9.x 439 kB/s | 177 kB 00:00 2026-03-10T13:11:34.692 INFO:teuthology.orchestra.run.vm07.stdout:(71/136): python3-mako-1.1.4-6.el9.noarch.rpm 472 kB/s | 172 kB 00:00 2026-03-10T13:11:34.764 INFO:teuthology.orchestra.run.vm07.stdout:(72/136): python3-markupsafe-1.1.1-12.el9.x86_6 240 kB/s | 35 kB 00:00 2026-03-10T13:11:35.049 INFO:teuthology.orchestra.run.vm07.stdout:(73/136): python3-babel-2.9.1-2.el9.noarch.rpm 2.7 MB/s | 6.0 MB 00:02 2026-03-10T13:11:36.316 INFO:teuthology.orchestra.run.vm07.stdout:(74/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 3.8 MB/s | 6.1 MB 00:01 2026-03-10T13:11:36.668 INFO:teuthology.orchestra.run.vm07.stdout:(75/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 232 kB/s | 442 kB 00:01 2026-03-10T13:11:36.782 
INFO:teuthology.orchestra.run.vm07.stdout:(76/136): python3-packaging-20.9-5.el9.noarch.r 45 kB/s | 77 kB 00:01 2026-03-10T13:11:37.032 INFO:teuthology.orchestra.run.vm07.stdout:(77/136): python3-protobuf-3.14.0-17.el9.noarch 374 kB/s | 267 kB 00:00 2026-03-10T13:11:37.187 INFO:teuthology.orchestra.run.vm07.stdout:(78/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 303 kB/s | 157 kB 00:00 2026-03-10T13:11:37.317 INFO:teuthology.orchestra.run.vm07.stdout:(79/136): python3-pyasn1-modules-0.4.8-7.el9.no 518 kB/s | 277 kB 00:00 2026-03-10T13:11:37.392 INFO:teuthology.orchestra.run.vm07.stdout:(80/136): python3-requests-oauthlib-1.3.0-12.el 149 kB/s | 54 kB 00:00 2026-03-10T13:11:38.957 INFO:teuthology.orchestra.run.vm07.stdout:(81/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 11 MB/s | 19 MB 00:01 2026-03-10T13:11:38.969 INFO:teuthology.orchestra.run.vm07.stdout:(82/136): python3-toml-0.10.2-6.el9.noarch.rpm 25 kB/s | 42 kB 00:01 2026-03-10T13:11:39.001 INFO:teuthology.orchestra.run.vm07.stdout:(83/136): qatlib-25.08.0-2.el9.x86_64.rpm 149 kB/s | 240 kB 00:01 2026-03-10T13:11:39.100 INFO:teuthology.orchestra.run.vm07.stdout:(84/136): qatlib-service-25.08.0-2.el9.x86_64.r 259 kB/s | 37 kB 00:00 2026-03-10T13:11:39.639 INFO:teuthology.orchestra.run.vm07.stdout:(85/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 104 kB/s | 66 kB 00:00 2026-03-10T13:11:39.668 INFO:teuthology.orchestra.run.vm07.stdout:(86/136): socat-1.7.4.1-8.el9.x86_64.rpm 455 kB/s | 303 kB 00:00 2026-03-10T13:11:39.684 INFO:teuthology.orchestra.run.vm07.stdout:(87/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 109 kB/s | 64 kB 00:00 2026-03-10T13:11:39.702 INFO:teuthology.orchestra.run.vm07.stdout:(88/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 30 MB/s | 551 kB 00:00 2026-03-10T13:11:39.729 INFO:teuthology.orchestra.run.vm07.stdout:(89/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 11 MB/s | 308 kB 00:00 2026-03-10T13:11:39.732 INFO:teuthology.orchestra.run.vm07.stdout:(90/136): grpc-data-1.46.7-10.el9.noarch.rpm 7.6 MB/s | 19 kB 00:00 2026-03-10T13:11:39.762 INFO:teuthology.orchestra.run.vm07.stdout:(91/136): lua-devel-5.4.4-4.el9.x86_64.rpm 181 kB/s | 22 kB 00:00 2026-03-10T13:11:39.777 INFO:teuthology.orchestra.run.vm07.stdout:(92/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 1.7 MB/s | 25 kB 00:00 2026-03-10T13:11:39.785 INFO:teuthology.orchestra.run.vm07.stdout:(93/136): liboath-2.6.12-1.el9.x86_64.rpm 5.8 MB/s | 49 kB 00:00 2026-03-10T13:11:39.791 INFO:teuthology.orchestra.run.vm07.stdout:(94/136): libunwind-1.6.2-1.el9.x86_64.rpm 12 MB/s | 67 kB 00:00 2026-03-10T13:11:39.802 INFO:teuthology.orchestra.run.vm07.stdout:(95/136): luarocks-3.9.2-5.el9.noarch.rpm 14 MB/s | 151 kB 00:00 2026-03-10T13:11:39.828 INFO:teuthology.orchestra.run.vm07.stdout:(96/136): libarrow-9.0.0-15.el9.x86_64.rpm 46 MB/s | 4.4 MB 00:00 2026-03-10T13:11:39.831 INFO:teuthology.orchestra.run.vm07.stdout:(97/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 28 MB/s | 838 kB 00:00 2026-03-10T13:11:39.835 INFO:teuthology.orchestra.run.vm07.stdout:(98/136): python3-autocommand-2.2.2-8.el9.noarc 8.6 MB/s | 29 kB 00:00 2026-03-10T13:11:39.837 INFO:teuthology.orchestra.run.vm07.stdout:(99/136): python3-asyncssh-2.13.2-5.el9.noarch. 
58 MB/s | 548 kB 00:00 2026-03-10T13:11:39.838 INFO:teuthology.orchestra.run.vm07.stdout:(100/136): python3-backports-tarfile-1.2.0-1.el 17 MB/s | 60 kB 00:00 2026-03-10T13:11:39.840 INFO:teuthology.orchestra.run.vm07.stdout:(101/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 16 MB/s | 43 kB 00:00 2026-03-10T13:11:39.841 INFO:teuthology.orchestra.run.vm07.stdout:(102/136): python3-cachetools-4.2.4-1.el9.noarc 12 MB/s | 32 kB 00:00 2026-03-10T13:11:39.843 INFO:teuthology.orchestra.run.vm07.stdout:(103/136): python3-certifi-2023.05.07-4.el9.noa 6.1 MB/s | 14 kB 00:00 2026-03-10T13:11:39.846 INFO:teuthology.orchestra.run.vm07.stdout:(104/136): python3-cheroot-10.0.1-4.el9.noarch. 36 MB/s | 173 kB 00:00 2026-03-10T13:11:39.862 INFO:teuthology.orchestra.run.vm07.stdout:(105/136): python3-cherrypy-18.6.1-2.el9.noarch 19 MB/s | 358 kB 00:00 2026-03-10T13:11:39.888 INFO:teuthology.orchestra.run.vm07.stdout:(106/136): python3-google-auth-2.45.0-1.el9.noa 5.9 MB/s | 254 kB 00:00 2026-03-10T13:11:39.895 INFO:teuthology.orchestra.run.vm07.stdout:(107/136): python3-grpcio-tools-1.46.7-10.el9.x 22 MB/s | 144 kB 00:00 2026-03-10T13:11:39.898 INFO:teuthology.orchestra.run.vm07.stdout:(108/136): python3-jaraco-8.2.1-3.el9.noarch.rp 4.2 MB/s | 11 kB 00:00 2026-03-10T13:11:39.900 INFO:teuthology.orchestra.run.vm07.stdout:(109/136): python3-jaraco-classes-3.2.1-5.el9.n 7.5 MB/s | 18 kB 00:00 2026-03-10T13:11:39.903 INFO:teuthology.orchestra.run.vm07.stdout:(110/136): python3-jaraco-collections-3.0.0-8.e 8.6 MB/s | 23 kB 00:00 2026-03-10T13:11:39.906 INFO:teuthology.orchestra.run.vm07.stdout:(111/136): python3-jaraco-context-6.0.1-3.el9.n 7.2 MB/s | 20 kB 00:00 2026-03-10T13:11:39.912 INFO:teuthology.orchestra.run.vm07.stdout:(112/136): python3-grpcio-1.46.7-10.el9.x86_64. 40 MB/s | 2.0 MB 00:00 2026-03-10T13:11:39.913 INFO:teuthology.orchestra.run.vm07.stdout:(113/136): python3-jaraco-functools-3.5.0-2.el9 2.9 MB/s | 19 kB 00:00 2026-03-10T13:11:39.915 INFO:teuthology.orchestra.run.vm07.stdout:(114/136): python3-jaraco-text-4.0.0-2.el9.noar 10 MB/s | 26 kB 00:00 2026-03-10T13:11:39.924 INFO:teuthology.orchestra.run.vm07.stdout:(115/136): python3-logutils-0.3.5-21.el9.noarch 5.4 MB/s | 46 kB 00:00 2026-03-10T13:11:39.930 INFO:teuthology.orchestra.run.vm07.stdout:(116/136): python3-more-itertools-8.12.0-2.el9. 
12 MB/s | 79 kB 00:00 2026-03-10T13:11:39.934 INFO:teuthology.orchestra.run.vm07.stdout:(117/136): python3-natsort-7.1.1-5.el9.noarch.r 18 MB/s | 58 kB 00:00 2026-03-10T13:11:39.954 INFO:teuthology.orchestra.run.vm07.stdout:(118/136): python3-kubernetes-26.1.0-3.el9.noar 25 MB/s | 1.0 MB 00:00 2026-03-10T13:11:39.956 INFO:teuthology.orchestra.run.vm07.stdout:(119/136): python3-pecan-1.4.2-3.el9.noarch.rpm 12 MB/s | 272 kB 00:00 2026-03-10T13:11:39.957 INFO:teuthology.orchestra.run.vm07.stdout:(120/136): python3-portend-3.1.0-2.el9.noarch.r 6.4 MB/s | 16 kB 00:00 2026-03-10T13:11:39.960 INFO:teuthology.orchestra.run.vm07.stdout:(121/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 23 MB/s | 90 kB 00:00 2026-03-10T13:11:39.960 INFO:teuthology.orchestra.run.vm07.stdout:(122/136): python3-repoze-lru-0.7-16.el9.noarch 9.2 MB/s | 31 kB 00:00 2026-03-10T13:11:39.964 INFO:teuthology.orchestra.run.vm07.stdout:(123/136): python3-rsa-4.9-2.el9.noarch.rpm 15 MB/s | 59 kB 00:00 2026-03-10T13:11:39.968 INFO:teuthology.orchestra.run.vm07.stdout:(124/136): python3-routes-2.5.1-5.el9.noarch.rp 23 MB/s | 188 kB 00:00 2026-03-10T13:11:39.969 INFO:teuthology.orchestra.run.vm07.stdout:(125/136): python3-tempora-5.0.0-2.el9.noarch.r 8.6 MB/s | 36 kB 00:00 2026-03-10T13:11:39.971 INFO:teuthology.orchestra.run.vm07.stdout:(126/136): python3-typing-extensions-4.15.0-1.e 29 MB/s | 86 kB 00:00 2026-03-10T13:11:39.976 INFO:teuthology.orchestra.run.vm07.stdout:(127/136): python3-websocket-client-1.2.3-2.el9 20 MB/s | 90 kB 00:00 2026-03-10T13:11:39.977 INFO:teuthology.orchestra.run.vm07.stdout:(128/136): python3-webob-1.8.8-2.el9.noarch.rpm 28 MB/s | 230 kB 00:00 2026-03-10T13:11:39.980 INFO:teuthology.orchestra.run.vm07.stdout:(129/136): python3-xmltodict-0.12.0-15.el9.noar 8.9 MB/s | 22 kB 00:00 2026-03-10T13:11:39.982 INFO:teuthology.orchestra.run.vm07.stdout:(130/136): python3-zc-lockfile-2.0-10.el9.noarc 8.2 MB/s | 20 kB 00:00 2026-03-10T13:11:39.986 INFO:teuthology.orchestra.run.vm07.stdout:(131/136): python3-werkzeug-2.0.3-3.el9.1.noarc 42 MB/s | 427 kB 00:00 2026-03-10T13:11:39.989 INFO:teuthology.orchestra.run.vm07.stdout:(132/136): re2-20211101-20.el9.x86_64.rpm 28 MB/s | 191 kB 00:00 2026-03-10T13:11:40.059 INFO:teuthology.orchestra.run.vm07.stdout:(133/136): thrift-0.15.0-4.el9.x86_64.rpm 22 MB/s | 1.6 MB 00:00 2026-03-10T13:11:41.131 INFO:teuthology.orchestra.run.vm07.stdout:(134/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 3.0 MB/s | 3.2 MB 00:01 2026-03-10T13:11:41.222 INFO:teuthology.orchestra.run.vm07.stdout:(135/136): librados2-19.2.3-678.ge911bdeb.el9.x 2.8 MB/s | 3.4 MB 00:01 2026-03-10T13:11:41.547 INFO:teuthology.orchestra.run.vm07.stdout:(136/136): protobuf-compiler-3.14.0-17.el9.x86_ 459 kB/s | 862 kB 00:01 2026-03-10T13:11:41.548 INFO:teuthology.orchestra.run.vm07.stdout:-------------------------------------------------------------------------------- 2026-03-10T13:11:41.548 INFO:teuthology.orchestra.run.vm07.stdout:Total 11 MB/s | 210 MB 00:19 2026-03-10T13:11:42.187 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check 2026-03-10T13:11:42.238 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded. 2026-03-10T13:11:42.238 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test 2026-03-10T13:11:43.115 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded. 
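[Editor's note: at this point dnf has fetched all 136 RPMs but installed nothing; 210 MB at the reported aggregate of 11 MB/s is about 19 s, consistent with the 00:19 elapsed time, and the depsolve has passed both the transaction check and the transaction test. A minimal sketch of how one could spot-check the staged build on the node at this stage; the cache path is dnf's default location and is an assumption here, as is re-running the depsolve against ceph-test:

    # List staged RPMs in the dnf cache and confirm they carry the job's
    # ceph build id (sha1 e911bdeb... -> release suffix ge911bdeb):
    ls /var/cache/dnf/*/packages/*.rpm | grep ge911bdeb
    # Re-run the same depsolve without installing anything:
    sudo dnf install --assumeno ceph-test-19.2.3-678.ge911bdeb.el9
]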
2026-03-10T13:11:43.116 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-03-10T13:11:44.177 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-03-10T13:11:44.192 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138
2026-03-10T13:11:44.207 INFO:teuthology.orchestra.run.vm07.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138
2026-03-10T13:11:44.404 INFO:teuthology.orchestra.run.vm07.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138
2026-03-10T13:11:44.408 INFO:teuthology.orchestra.run.vm07.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T13:11:44.477 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T13:11:44.479 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T13:11:44.511 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T13:11:44.520 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T13:11:44.524 INFO:teuthology.orchestra.run.vm07.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138
2026-03-10T13:11:44.528 INFO:teuthology.orchestra.run.vm07.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138
2026-03-10T13:11:44.533 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138
2026-03-10T13:11:44.544 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138
2026-03-10T13:11:44.545 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T13:11:44.583 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T13:11:44.585 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T13:11:44.602 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T13:11:44.637 INFO:teuthology.orchestra.run.vm07.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138
2026-03-10T13:11:44.680 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138
2026-03-10T13:11:44.688 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138
2026-03-10T13:11:44.717 INFO:teuthology.orchestra.run.vm07.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138
2026-03-10T13:11:44.732 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138
2026-03-10T13:11:44.743 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138
2026-03-10T13:11:44.753 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138
2026-03-10T13:11:44.761 INFO:teuthology.orchestra.run.vm07.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138
2026-03-10T13:11:44.766 INFO:teuthology.orchestra.run.vm07.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138
2026-03-10T13:11:44.772 INFO:teuthology.orchestra.run.vm07.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138
2026-03-10T13:11:44.804 INFO:teuthology.orchestra.run.vm07.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138
2026-03-10T13:11:44.826 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138
2026-03-10T13:11:44.830 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138
2026-03-10T13:11:44.839 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138
2026-03-10T13:11:44.841 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138
2026-03-10T13:11:44.876 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138
2026-03-10T13:11:44.885 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138
2026-03-10T13:11:44.901 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138
2026-03-10T13:11:44.920 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138
2026-03-10T13:11:44.929 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138
2026-03-10T13:11:44.965 INFO:teuthology.orchestra.run.vm07.stdout: Installing : zip-3.0-35.el9.x86_64 33/138
2026-03-10T13:11:44.976 INFO:teuthology.orchestra.run.vm07.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138
2026-03-10T13:11:44.984 INFO:teuthology.orchestra.run.vm07.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138
2026-03-10T13:11:45.019 INFO:teuthology.orchestra.run.vm07.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138
2026-03-10T13:11:45.100 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138
2026-03-10T13:11:45.124 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138
2026-03-10T13:11:45.133 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138
2026-03-10T13:11:45.143 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138
2026-03-10T13:11:45.150 INFO:teuthology.orchestra.run.vm07.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138
2026-03-10T13:11:45.155 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138
2026-03-10T13:11:45.176 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138
2026-03-10T13:11:45.207 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138
2026-03-10T13:11:45.214 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138
2026-03-10T13:11:45.221 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138
2026-03-10T13:11:45.237 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138
2026-03-10T13:11:45.250 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138
2026-03-10T13:11:45.263 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138
2026-03-10T13:11:45.331 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138
2026-03-10T13:11:45.340 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138
2026-03-10T13:11:45.352 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 52/138
2026-03-10T13:11:45.405 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138
2026-03-10T13:11:45.825 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138
2026-03-10T13:11:45.845 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138
2026-03-10T13:11:45.851 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138
2026-03-10T13:11:45.860 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138
2026-03-10T13:11:45.866 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138
2026-03-10T13:11:45.875 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138
2026-03-10T13:11:45.882 INFO:teuthology.orchestra.run.vm07.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138
2026-03-10T13:11:45.885 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138
2026-03-10T13:11:45.922 INFO:teuthology.orchestra.run.vm07.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138
2026-03-10T13:11:45.986 INFO:teuthology.orchestra.run.vm07.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138
2026-03-10T13:11:46.001 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138
2026-03-10T13:11:46.009 INFO:teuthology.orchestra.run.vm07.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138
2026-03-10T13:11:46.015 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138
2026-03-10T13:11:46.023 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138
2026-03-10T13:11:46.029 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138
2026-03-10T13:11:46.038 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138
2026-03-10T13:11:46.044 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138
2026-03-10T13:11:46.081 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138
2026-03-10T13:11:46.098 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138
2026-03-10T13:11:46.148 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138
2026-03-10T13:11:46.474 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138
2026-03-10T13:11:46.509 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138
2026-03-10T13:11:46.515 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138
2026-03-10T13:11:46.583 INFO:teuthology.orchestra.run.vm07.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138
2026-03-10T13:11:46.585 INFO:teuthology.orchestra.run.vm07.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138
2026-03-10T13:11:46.621 INFO:teuthology.orchestra.run.vm07.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138
2026-03-10T13:11:47.039 INFO:teuthology.orchestra.run.vm07.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138
2026-03-10T13:11:47.145 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138
2026-03-10T13:11:48.042 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138
2026-03-10T13:11:48.072 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138
2026-03-10T13:11:48.080 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138
2026-03-10T13:11:48.085 INFO:teuthology.orchestra.run.vm07.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138
2026-03-10T13:11:48.269 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138
2026-03-10T13:11:48.284 INFO:teuthology.orchestra.run.vm07.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-10T13:11:48.319 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-10T13:11:48.324 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138
2026-03-10T13:11:48.332 INFO:teuthology.orchestra.run.vm07.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138
2026-03-10T13:11:48.622 INFO:teuthology.orchestra.run.vm07.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138
2026-03-10T13:11:48.653 INFO:teuthology.orchestra.run.vm07.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-10T13:11:48.675 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-10T13:11:48.678 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138
2026-03-10T13:11:49.954 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T13:11:50.041 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T13:11:50.065 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T13:11:50.087 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138
2026-03-10T13:11:50.111 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138
2026-03-10T13:11:50.219 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138
2026-03-10T13:11:50.236 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138
2026-03-10T13:11:50.272 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138
2026-03-10T13:11:50.317 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138
2026-03-10T13:11:50.432 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138
2026-03-10T13:11:50.446 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138
2026-03-10T13:11:50.453 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-10T13:11:50.463 INFO:teuthology.orchestra.run.vm07.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138
2026-03-10T13:11:50.468 INFO:teuthology.orchestra.run.vm07.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138
2026-03-10T13:11:50.471 INFO:teuthology.orchestra.run.vm07.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-10T13:11:50.494 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-10T13:11:50.864 INFO:teuthology.orchestra.run.vm07.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138
2026-03-10T13:11:50.874 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-10T13:11:50.915 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-10T13:11:50.915 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-10T13:11:50.915 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-10T13:11:50.915 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:11:50.923 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T13:11:58.182 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T13:11:58.183 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /sys
2026-03-10T13:11:58.183 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /proc
2026-03-10T13:11:58.183 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /mnt
2026-03-10T13:11:58.183 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /var/tmp
2026-03-10T13:11:58.183 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /home
2026-03-10T13:11:58.183 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /root
2026-03-10T13:11:58.183 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /tmp
2026-03-10T13:11:58.183 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:11:58.316 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T13:11:58.344 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T13:11:58.344 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:11:58.344 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T13:11:58.344 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T13:11:58.344 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T13:11:58.344 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:11:58.585 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T13:11:58.610 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T13:11:58.610 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:11:58.610 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T13:11:58.610 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T13:11:58.610 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T13:11:58.610 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:11:58.619 INFO:teuthology.orchestra.run.vm07.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138
2026-03-10T13:11:58.622 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138
2026-03-10T13:11:58.641 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T13:11:58.641 INFO:teuthology.orchestra.run.vm07.stdout:Creating group 'qat' with GID 994.
2026-03-10T13:11:58.641 INFO:teuthology.orchestra.run.vm07.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-10T13:11:58.641 INFO:teuthology.orchestra.run.vm07.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-10T13:11:58.641 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:11:58.652 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T13:11:58.683 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T13:11:58.683 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-10T13:11:58.683 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:11:58.732 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138
2026-03-10T13:11:58.808 INFO:teuthology.orchestra.run.vm07.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138
2026-03-10T13:11:58.815 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T13:11:58.830 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T13:11:58.830 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:11:58.830 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T13:11:58.830 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:11:59.658 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T13:11:59.683 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T13:11:59.684 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:11:59.684 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T13:11:59.684 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T13:11:59.684 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T13:11:59.684 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:11:59.743 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T13:11:59.747 INFO:teuthology.orchestra.run.vm07.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T13:11:59.754 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138
2026-03-10T13:11:59.777 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138
2026-03-10T13:11:59.781 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T13:12:00.347 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T13:12:00.418 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T13:12:00.973 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T13:12:00.976 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T13:12:01.043 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T13:12:01.104 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138
2026-03-10T13:12:01.107 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T13:12:01.133 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T13:12:01.133 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:12:01.133 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T13:12:01.133 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T13:12:01.133 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T13:12:01.133 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:12:01.149 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T13:12:01.161 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T13:12:01.739 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138
2026-03-10T13:12:01.744 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T13:12:01.765 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T13:12:01.765 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:12:01.765 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T13:12:01.766 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T13:12:01.766 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T13:12:01.766 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:12:01.777 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T13:12:01.799 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T13:12:01.799 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:12:01.799 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T13:12:01.799 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:12:01.968 INFO:teuthology.orchestra.run.vm07.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T13:12:01.991 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T13:12:01.991 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:12:01.991 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T13:12:01.991 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T13:12:01.991 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
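[Editor's note: the repeated pairs "Glob pattern passed to enable, but globs are not supported for this." / "Invalid unit name ... escaped as ..." above are emitted while the Ceph packages' %post scriptlets run; the scriptlets ask systemd to enable templated units with a glob (e.g. ceph-osd@*.service), systemd rejects the glob and escapes the "*", so only the per-daemon targets (ceph-mon.target, ceph-osd.target, ...) are actually enabled via the "Created symlink" lines. The warnings are harmless for this job, which deploys daemons through cephadm rather than through these units. A sketch of how one could confirm this on the node; the grep pattern is an assumption, the commands themselves are standard rpm/systemctl:

    # Show the packaged scriptlet that triggers the warning:
    rpm -q --scripts ceph-osd | grep -n 'systemctl'
    # Only the target, not the templated units, ends up enabled:
    systemctl is-enabled ceph-osd.target
]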
2026-03-10T13:12:01.991 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:12:04.908 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138
2026-03-10T13:12:04.919 INFO:teuthology.orchestra.run.vm07.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138
2026-03-10T13:12:04.924 INFO:teuthology.orchestra.run.vm07.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138
2026-03-10T13:12:04.981 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138
2026-03-10T13:12:04.991 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-10T13:12:04.995 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138
2026-03-10T13:12:04.995 INFO:teuthology.orchestra.run.vm07.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138
2026-03-10T13:12:05.013 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138
2026-03-10T13:12:05.013 INFO:teuthology.orchestra.run.vm07.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138
2026-03-10T13:12:06.410 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138
2026-03-10T13:12:06.411 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138
2026-03-10T13:12:06.412 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138
2026-03-10T13:12:06.412 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138
2026-03-10T13:12:06.412 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 69/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138
2026-03-10T13:12:06.413 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138
2026-03-10T13:12:06.414 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138
2026-03-10T13:12:06.414 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138
2026-03-10T13:12:06.414 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138
2026-03-10T13:12:06.414 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138
2026-03-10T13:12:06.414 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138
2026-03-10T13:12:06.414 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138
2026-03-10T13:12:06.414 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 85/138
2026-03-10T13:12:06.414 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138
2026-03-10T13:12:06.414 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138
2026-03-10T13:12:06.414 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138
2026-03-10T13:12:06.414 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138
2026-03-10T13:12:06.414 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138
2026-03-10T13:12:06.414 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138
2026-03-10T13:12:06.414 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 93/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138
2026-03-10T13:12:06.415 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138
2026-03-10T13:12:06.416 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138
2026-03-10T13:12:06.416 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-10T13:12:06.416 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138
2026-03-10T13:12:06.416 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout:Upgraded:
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout:Installed:
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:12:06.522 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-10T13:12:06.523
INFO:teuthology.orchestra.run.vm07.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T13:12:06.523 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: 
python3-logutils-0.3.5-21.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T13:12:06.524 
INFO:teuthology.orchestra.run.vm07.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T13:12:06.524 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T13:12:06.525 INFO:teuthology.orchestra.run.vm07.stdout: zip-3.0-35.el9.x86_64 2026-03-10T13:12:06.525 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:12:06.525 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-10T13:12:06.617 DEBUG:teuthology.parallel:result is None 2026-03-10T13:12:06.617 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:12:07.235 DEBUG:teuthology.orchestra.run.vm07:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-10T13:12:07.253 INFO:teuthology.orchestra.run.vm07.stdout:19.2.3-678.ge911bdeb.el9 2026-03-10T13:12:07.253 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-10T13:12:07.253 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 2026-03-10T13:12:07.254 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-10T13:12:07.254 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T13:12:07.254 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T13:12:07.319 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 2026-03-10T13:12:07.320 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T13:12:07.320 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T13:12:07.383 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T13:12:07.448 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-10T13:12:07.448 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T13:12:07.448 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T13:12:07.511 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T13:12:07.574 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-10T13:12:07.574 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T13:12:07.574 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T13:12:07.637 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T13:12:07.700 INFO:teuthology.run_tasks:Running task cephadm... 
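The install task above ships its helper scripts (valgrind.supp, daemon-helper, adjust-ulimits, stdin-killer) by piping each file into "sudo dd of=<path>" on the remote and then marking it read/execute for everyone. A minimal sketch of that pattern under the assumption of plain ssh access; the ship_file helper below is hypothetical (teuthology does this through its orchestra layer, not this code):

    import subprocess

    def ship_file(host, local_path, remote_path, mode="a=rx"):
        # Pipe the local file into `sudo dd of=<remote_path>` over ssh,
        # mirroring the `sudo dd of=/usr/bin/daemon-helper` lines above.
        with open(local_path, "rb") as f:
            subprocess.run(["ssh", host, f"sudo dd of={remote_path}"],
                           stdin=f, check=True)
        # Then fix permissions, as in `sudo chmod a=rx -- /usr/bin/daemon-helper`.
        subprocess.run(["ssh", host, f"sudo chmod {mode} -- {remote_path}"],
                       check=True)

    # hypothetical usage against this run's target:
    # ship_file("ubuntu@vm07.local", "qa/daemon-helper", "/usr/bin/daemon-helper")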
2026-03-10T13:12:07.743 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon election default strategy': 3}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': True}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'CEPHADM_FAILED_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'} 2026-03-10T13:12:07.743 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:12:07.743 INFO:tasks.cephadm:Cluster fsid is bd98ed20-1c82-11f1-9239-ff903ae4ee6f 2026-03-10T13:12:07.743 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-10T13:12:07.743 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.107'} 2026-03-10T13:12:07.743 INFO:tasks.cephadm:First mon is mon.a on vm07 2026-03-10T13:12:07.743 INFO:tasks.cephadm:First mgr is a 2026-03-10T13:12:07.744 INFO:tasks.cephadm:Normalizing hostnames... 2026-03-10T13:12:07.744 DEBUG:teuthology.orchestra.run.vm07:> sudo hostname $(hostname -s) 2026-03-10T13:12:07.766 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra 2026-03-10T13:12:07.767 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:12:08.440 INFO:tasks.cephadm:builder_project result: [{'url': 'https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'chacra_url': 'https://3.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'centos', 'distro_version': '9', 'distro_codename': None, 'modified': '2026-02-25 18:55:15.146628', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['source', 'x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678.ge911bdeb', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.26+soko16', 'job_name': 'ceph-dev-pipeline'}}] 2026-03-10T13:12:09.096 INFO:tasks.util.chacra:got chacra host 3.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=centos%2F9%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:12:09.098 INFO:tasks.cephadm:Discovered cachra url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm 2026-03-10T13:12:09.098 INFO:tasks.cephadm:Downloading cephadm from url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm 2026-03-10T13:12:09.098 DEBUG:teuthology.orchestra.run.vm07:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T13:12:10.547 INFO:teuthology.orchestra.run.vm07.stdout:-rw-r--r--. 
1 ubuntu ubuntu 788355 Mar 10 13:12 /home/ubuntu/cephtest/cephadm 2026-03-10T13:12:10.547 DEBUG:teuthology.orchestra.run.vm07:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T13:12:10.564 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 2026-03-10T13:12:10.564 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T13:12:10.749 INFO:teuthology.orchestra.run.vm07.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T13:12:45.783 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-10T13:12:45.783 INFO:teuthology.orchestra.run.vm07.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-10T13:12:45.783 INFO:teuthology.orchestra.run.vm07.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-10T13:12:45.784 INFO:teuthology.orchestra.run.vm07.stdout: "repo_digests": [ 2026-03-10T13:12:45.784 INFO:teuthology.orchestra.run.vm07.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-10T13:12:45.784 INFO:teuthology.orchestra.run.vm07.stdout: ] 2026-03-10T13:12:45.784 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-10T13:12:45.801 DEBUG:teuthology.orchestra.run.vm07:> sudo mkdir -p /etc/ceph 2026-03-10T13:12:45.826 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 777 /etc/ceph 2026-03-10T13:12:45.889 INFO:tasks.cephadm:Writing seed config... 2026-03-10T13:12:45.889 INFO:tasks.cephadm: override: [global] mon election default strategy = 3 2026-03-10T13:12:45.890 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-10T13:12:45.890 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-10T13:12:45.890 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = True 2026-03-10T13:12:45.890 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-10T13:12:45.890 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-10T13:12:45.890 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-10T13:12:45.890 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-10T13:12:45.890 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-10T13:12:45.890 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-10T13:12:45.890 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T13:12:45.890 DEBUG:teuthology.orchestra.run.vm07:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 2026-03-10T13:12:45.945 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = bd98ed20-1c82-11f1-9239-ff903ae4ee6f
mon election default strategy = 3
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
mgr/cephadm/use_agent = True
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-10T13:12:45.945 DEBUG:teuthology.orchestra.run.vm07:mon.a> sudo journalctl -f -n 0 -u ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mon.a.service 2026-03-10T13:12:45.987 DEBUG:teuthology.orchestra.run.vm07:mgr.a> sudo journalctl -f -n 0 -u ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a.service 2026-03-10T13:12:46.029 INFO:tasks.cephadm:Bootstrapping...
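The seed config handed to bootstrap via --config is built from the per-section "override:" entries logged above. A minimal sketch of rendering that override mapping into INI text with the standard configparser module; the overrides dict below is copied from the task's Config: line, and seed.ceph.conf is this run's path:

    import configparser
    import io

    # section -> options, as shown in the "override:" log lines above
    overrides = {
        "global": {"mon election default strategy": 3},
        "mgr": {"debug mgr": 20, "debug ms": 1, "mgr/cephadm/use_agent": True},
        "mon": {"debug mon": 20, "debug ms": 1, "debug paxos": 20},
        "osd": {"debug ms": 1, "debug osd": 20,
                "osd mclock iops capacity threshold hdd": 49000},
    }

    conf = configparser.ConfigParser()
    for section, options in overrides.items():
        conf[section] = {key: str(value) for key, value in options.items()}

    buf = io.StringIO()
    conf.write(buf)
    print(buf.getvalue())  # INI text of the kind written to seed.ceph.conf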
2026-03-10T13:12:46.030 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.107 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:12:46.167 INFO:teuthology.orchestra.run.vm07.stdout:-------------------------------------------------------------------------------- 2026-03-10T13:12:46.167 INFO:teuthology.orchestra.run.vm07.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', 'bd98ed20-1c82-11f1-9239-ff903ae4ee6f', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'a', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.107', '--skip-admin-label'] 2026-03-10T13:12:46.167 INFO:teuthology.orchestra.run.vm07.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-10T13:12:46.167 INFO:teuthology.orchestra.run.vm07.stdout:Verifying podman|docker is present... 2026-03-10T13:12:46.186 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stdout 5.8.0 2026-03-10T13:12:46.186 INFO:teuthology.orchestra.run.vm07.stdout:Verifying lvm2 is present... 2026-03-10T13:12:46.186 INFO:teuthology.orchestra.run.vm07.stdout:Verifying time synchronization is in place... 2026-03-10T13:12:46.193 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T13:12:46.193 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T13:12:46.198 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T13:12:46.198 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout inactive 2026-03-10T13:12:46.203 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout enabled 2026-03-10T13:12:46.208 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout active 2026-03-10T13:12:46.208 INFO:teuthology.orchestra.run.vm07.stdout:Unit chronyd.service is enabled and running 2026-03-10T13:12:46.208 INFO:teuthology.orchestra.run.vm07.stdout:Repeating the final host check... 
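The time-synchronization check above probes candidate unit names with systemctl is-enabled / is-active and accepts the first one that passes both, which is why chrony.service fails before chronyd.service succeeds. A sketch of that fallback; the exact candidate list below is an assumption (cephadm checks a similar set):

    import subprocess

    def first_active_unit(candidates):
        # Accept the first unit that is both enabled and active, mirroring
        # the chrony.service -> chronyd.service fallback in the log above.
        for unit in candidates:
            enabled = subprocess.run(["systemctl", "is-enabled", unit],
                                     capture_output=True, text=True)
            active = subprocess.run(["systemctl", "is-active", unit],
                                    capture_output=True, text=True)
            if (enabled.stdout.strip() == "enabled"
                    and active.stdout.strip() == "active"):
                return unit
        return None

    print(first_active_unit(["chrony.service", "chronyd.service",
                             "systemd-timesyncd.service", "ntpd.service"]))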
2026-03-10T13:12:46.225 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stdout 5.8.0 2026-03-10T13:12:46.225 INFO:teuthology.orchestra.run.vm07.stdout:podman (/bin/podman) version 5.8.0 is present 2026-03-10T13:12:46.225 INFO:teuthology.orchestra.run.vm07.stdout:systemctl is present 2026-03-10T13:12:46.225 INFO:teuthology.orchestra.run.vm07.stdout:lvcreate is present 2026-03-10T13:12:46.231 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T13:12:46.231 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T13:12:46.235 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T13:12:46.236 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout inactive 2026-03-10T13:12:46.240 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout enabled 2026-03-10T13:12:46.245 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout active 2026-03-10T13:12:46.245 INFO:teuthology.orchestra.run.vm07.stdout:Unit chronyd.service is enabled and running 2026-03-10T13:12:46.245 INFO:teuthology.orchestra.run.vm07.stdout:Host looks OK 2026-03-10T13:12:46.245 INFO:teuthology.orchestra.run.vm07.stdout:Cluster fsid: bd98ed20-1c82-11f1-9239-ff903ae4ee6f 2026-03-10T13:12:46.245 INFO:teuthology.orchestra.run.vm07.stdout:Acquiring lock 139774363274400 on /run/cephadm/bd98ed20-1c82-11f1-9239-ff903ae4ee6f.lock 2026-03-10T13:12:46.245 INFO:teuthology.orchestra.run.vm07.stdout:Lock 139774363274400 acquired on /run/cephadm/bd98ed20-1c82-11f1-9239-ff903ae4ee6f.lock 2026-03-10T13:12:46.245 INFO:teuthology.orchestra.run.vm07.stdout:Verifying IP 192.168.123.107 port 3300 ... 2026-03-10T13:12:46.246 INFO:teuthology.orchestra.run.vm07.stdout:Verifying IP 192.168.123.107 port 6789 ... 
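"Verifying IP ... port 3300/6789" checks that nothing is already bound to the mon's msgr2 and legacy msgr1 ports before the daemon is created. A bind-probe sketch of the same idea, to be run on the target host itself:

    import socket

    def port_is_free(ip, port):
        # If we can bind the address, no other daemon holds it yet.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind((ip, port))
                return True
            except OSError:
                return False

    for port in (3300, 6789):  # msgr2 and msgr1 monitor ports
        print(port, port_is_free("192.168.123.107", port))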
2026-03-10T13:12:46.246 INFO:teuthology.orchestra.run.vm07.stdout:Base mon IP(s) is [192.168.123.107:3300, 192.168.123.107:6789], mon addrv is [v2:192.168.123.107:3300,v1:192.168.123.107:6789] 2026-03-10T13:12:46.248 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.107 metric 100 2026-03-10T13:12:46.248 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.107 metric 100 2026-03-10T13:12:46.251 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-10T13:12:46.251 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium 2026-03-10T13:12:46.253 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-10T13:12:46.253 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-10T13:12:46.253 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T13:12:46.253 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000 2026-03-10T13:12:46.253 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:7/64 scope link noprefixroute 2026-03-10T13:12:46.253 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T13:12:46.253 INFO:teuthology.orchestra.run.vm07.stdout:Mon IP `192.168.123.107` is in CIDR network `192.168.123.0/24` 2026-03-10T13:12:46.253 INFO:teuthology.orchestra.run.vm07.stdout:Mon IP `192.168.123.107` is in CIDR network `192.168.123.0/24` 2026-03-10T13:12:46.253 INFO:teuthology.orchestra.run.vm07.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24'] 2026-03-10T13:12:46.254 INFO:teuthology.orchestra.run.vm07.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-10T13:12:46.254 INFO:teuthology.orchestra.run.vm07.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T13:12:47.483 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-10T13:12:47.483 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
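Bootstrap infers the public network by parsing the ip route output and checking which directly connected network contains the mon IP, hence "Mon IP 192.168.123.107 is in CIDR network 192.168.123.0/24". The equivalent containment check with the standard ipaddress module, using the networks shown in the route dump above:

    import ipaddress

    mon_ip = ipaddress.ip_address("192.168.123.107")
    # directly connected networks from the `ip route` output above
    local_networks = [ipaddress.ip_network("192.168.123.0/24"),
                      ipaddress.ip_network("fe80::/64")]

    public_cidr = next((net for net in local_networks if mon_ip in net), None)
    print(public_cidr)  # 192.168.123.0/24, the inferred mon public CIDR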
2026-03-10T13:12:47.483 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stderr Getting image source signatures 2026-03-10T13:12:47.483 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92 2026-03-10T13:12:47.483 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7 2026-03-10T13:12:47.483 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-10T13:12:47.483 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stderr Writing manifest to image destination 2026-03-10T13:12:47.746 INFO:teuthology.orchestra.run.vm07.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T13:12:47.747 INFO:teuthology.orchestra.run.vm07.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T13:12:47.747 INFO:teuthology.orchestra.run.vm07.stdout:Extracting ceph user uid/gid from container image... 2026-03-10T13:12:47.965 INFO:teuthology.orchestra.run.vm07.stdout:stat: stdout 167 167 2026-03-10T13:12:47.965 INFO:teuthology.orchestra.run.vm07.stdout:Creating initial keys... 2026-03-10T13:12:48.171 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-authtool: stdout AQDQGLBpduU1AxAAl7ohBwqM54y32LrYaH/VXA== 2026-03-10T13:12:48.382 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-authtool: stdout AQDQGLBpQdrFDxAAp7yPDa3O0beN+XGcR+S98Q== 2026-03-10T13:12:48.592 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-authtool: stdout AQDQGLBpLzQBHBAAgcCK5tgZtyCcxKG/ZRk50g== 2026-03-10T13:12:48.592 INFO:teuthology.orchestra.run.vm07.stdout:Creating initial monmap... 2026-03-10T13:12:48.804 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T13:12:48.804 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-10T13:12:48.804 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to bd98ed20-1c82-11f1-9239-ff903ae4ee6f 2026-03-10T13:12:48.804 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T13:12:48.804 INFO:teuthology.orchestra.run.vm07.stdout:monmaptool for a [v2:192.168.123.107:3300,v1:192.168.123.107:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T13:12:48.804 INFO:teuthology.orchestra.run.vm07.stdout:setting min_mon_release = quincy 2026-03-10T13:12:48.804 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: set fsid to bd98ed20-1c82-11f1-9239-ff903ae4ee6f 2026-03-10T13:12:48.804 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T13:12:48.804 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:12:48.804 INFO:teuthology.orchestra.run.vm07.stdout:Creating mon... 2026-03-10T13:12:49.040 INFO:teuthology.orchestra.run.vm07.stdout:create mon.a on 2026-03-10T13:12:49.183 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target". 
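"Extracting ceph user uid/gid from container image" runs stat inside the image to learn which numeric uid/gid the daemons' files should be owned by (167:167 here). A sketch of that probe via podman; the probed path /var/lib/ceph is an assumption, since the log only shows the result of the probe:

    import subprocess

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

    # Ask stat, running inside the image, who owns a ceph-owned path.
    # NOTE: /var/lib/ceph is an assumed path for illustration.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    uid, gid = map(int, out)
    print(uid, gid)  # 167 167, matching "stat: stdout 167 167" above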
2026-03-10T13:12:49.295 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-10T13:12:49.409 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f.target → /etc/systemd/system/ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f.target. 2026-03-10T13:12:49.409 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f.target → /etc/systemd/system/ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f.target. 2026-03-10T13:12:49.542 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mon.a 2026-03-10T13:12:49.542 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to reset failed state of unit ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mon.a.service: Unit ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mon.a.service not loaded. 2026-03-10T13:12:49.671 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f.target.wants/ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mon.a.service → /etc/systemd/system/ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@.service. 2026-03-10T13:12:49.812 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:49 vm07 podman[51621]: 2026-03-10 13:12:49.778781627 +0000 UTC m=+0.015396205 container create 7d54c640351f6e3a4fe93d951c9e5acdd2acc729da11266e26da621f0116429d (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) 2026-03-10T13:12:49.827 INFO:teuthology.orchestra.run.vm07.stdout:firewalld does not appear to be present 2026-03-10T13:12:49.827 INFO:teuthology.orchestra.run.vm07.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T13:12:49.827 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mon to start... 2026-03-10T13:12:49.827 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mon... 
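After the mon unit is created, bootstrap waits for the daemon to answer before proceeding ("Waiting for mon..."). A minimal polling sketch, assuming the admin conf and keyring that bootstrap writes to /etc/ceph are already in place:

    import subprocess
    import time

    def wait_for_mon(timeout=60.0, interval=2.0):
        # Poll `ceph status` until the mon responds or the deadline passes.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            r = subprocess.run(["ceph", "status"],
                               capture_output=True, text=True)
            if r.returncode == 0:
                return True  # the "mon is available" point in the log
            time.sleep(interval)
        return False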
2026-03-10T13:12:50.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:49 vm07 podman[51621]: 2026-03-10 13:12:49.816662721 +0000 UTC m=+0.053277299 container init 7d54c640351f6e3a4fe93d951c9e5acdd2acc729da11266e26da621f0116429d (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True) 2026-03-10T13:12:50.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:49 vm07 podman[51621]: 2026-03-10 13:12:49.820928485 +0000 UTC m=+0.057543063 container start 7d54c640351f6e3a4fe93d951c9e5acdd2acc729da11266e26da621f0116429d (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T13:12:50.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:49 vm07 bash[51621]: 7d54c640351f6e3a4fe93d951c9e5acdd2acc729da11266e26da621f0116429d 2026-03-10T13:12:50.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:49 vm07 podman[51621]: 2026-03-10 13:12:49.772264851 +0000 UTC m=+0.008879429 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:12:50.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:49 vm07 systemd[1]: Started Ceph mon.a for bd98ed20-1c82-11f1-9239-ff903ae4ee6f. 
2026-03-10T13:12:50.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:49 vm07 ceph-mon[51656]: mkfs bd98ed20-1c82-11f1-9239-ff903ae4ee6f 2026-03-10T13:12:50.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:49 vm07 ceph-mon[51656]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout cluster: 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout id: bd98ed20-1c82-11f1-9239-ff903ae4ee6f 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout services: 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.129493s) 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout data: 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout pgs: 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:mon is available 2026-03-10T13:12:50.131 INFO:teuthology.orchestra.run.vm07.stdout:Assimilating anything we can from ceph.conf... 
2026-03-10T13:12:50.435 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-10T13:12:50.435 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T13:12:50.435 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout fsid = bd98ed20-1c82-11f1-9239-ff903ae4ee6f 2026-03-10T13:12:50.435 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T13:12:50.435 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.107:3300,v1:192.168.123.107:6789] 2026-03-10T13:12:50.435 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T13:12:50.435 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T13:12:50.435 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T13:12:50.435 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T13:12:50.435 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-10T13:12:50.435 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T13:12:50.435 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mgr/cephadm/use_agent = True 2026-03-10T13:12:50.435 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T13:12:50.435 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-10T13:12:50.435 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T13:12:50.436 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T13:12:50.436 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T13:12:50.436 INFO:teuthology.orchestra.run.vm07.stdout:Generating new minimal ceph.conf... 2026-03-10T13:12:50.714 INFO:teuthology.orchestra.run.vm07.stdout:Restarting the monitor... 2026-03-10T13:12:50.977 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:50 vm07 systemd[1]: Stopping Ceph mon.a for bd98ed20-1c82-11f1-9239-ff903ae4ee6f... 
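The assimilate/minimal-conf step above folds the seed file into the monitors' central config database and then writes back only what clients need (essentially fsid and mon_host). The underlying mon commands are ceph config assimilate-conf and ceph config generate-minimal-conf; a sketch of invoking them directly, using this run's seed path:

    import subprocess

    # Absorb a flat ceph.conf into the central config store; options that
    # cannot live there are echoed back, as in the output above.
    subprocess.run(["ceph", "config", "assimilate-conf",
                    "-i", "/home/ubuntu/cephtest/seed.ceph.conf"], check=True)

    # Emit the minimal client config that bootstrap installs as
    # /etc/ceph/ceph.conf.
    minimal = subprocess.run(["ceph", "config", "generate-minimal-conf"],
                             capture_output=True, text=True,
                             check=True).stdout
    print(minimal)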
2026-03-10T13:12:50.977 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:50 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a[51631]: 2026-03-10T13:12:50.793+0000 7f9622035640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T13:12:50.977 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:50 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a[51631]: 2026-03-10T13:12:50.793+0000 7f9622035640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T13:12:50.977 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:50 vm07 podman[51932]: 2026-03-10 13:12:50.81656843 +0000 UTC m=+0.037463263 container died 7d54c640351f6e3a4fe93d951c9e5acdd2acc729da11266e26da621f0116429d (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS) 2026-03-10T13:12:50.977 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:50 vm07 podman[51932]: 2026-03-10 13:12:50.93028011 +0000 UTC m=+0.151174943 container remove 7d54c640351f6e3a4fe93d951c9e5acdd2acc729da11266e26da621f0116429d (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T13:12:50.977 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:50 vm07 bash[51932]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a 2026-03-10T13:12:51.107 INFO:teuthology.orchestra.run.vm07.stdout:Setting public_network to 192.168.123.0/24 in mon config section 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:50 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mon.a.service: Deactivated successfully. 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:50 vm07 systemd[1]: Stopped Ceph mon.a for bd98ed20-1c82-11f1-9239-ff903ae4ee6f. 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:50 vm07 systemd[1]: Starting Ceph mon.a for bd98ed20-1c82-11f1-9239-ff903ae4ee6f... 
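"Setting public_network to 192.168.123.0/24 in mon config section" stores the inferred CIDR centrally so any future mon binds on the right network, after which the running mon is restarted to pick it up. A sketch of the equivalent commands; that bootstrap performs exactly these two steps is an inference from the log above:

    import subprocess

    # Persist the inferred CIDR in the mon section of the config store.
    subprocess.run(["ceph", "config", "set", "mon",
                    "public_network", "192.168.123.0/24"], check=True)

    # Restart the mon unit so the setting takes effect; the unit name
    # follows the ceph-<fsid>@mon.<id>.service pattern seen above.
    subprocess.run(["sudo", "systemctl", "restart",
                    "ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mon.a.service"],
                   check=True)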
2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 podman[52011]: 2026-03-10 13:12:51.067059441 +0000 UTC m=+0.015255953 container create ac917e44bc18e080c0ad065518ab082b8e9735d392976d8d37b1e29d7aee2fef (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 podman[52011]: 2026-03-10 13:12:51.098529867 +0000 UTC m=+0.046726369 container init ac917e44bc18e080c0ad065518ab082b8e9735d392976d8d37b1e29d7aee2fef (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223) 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 podman[52011]: 2026-03-10 13:12:51.102065354 +0000 UTC m=+0.050261866 container start ac917e44bc18e080c0ad065518ab082b8e9735d392976d8d37b1e29d7aee2fef (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 bash[52011]: ac917e44bc18e080c0ad065518ab082b8e9735d392976d8d37b1e29d7aee2fef 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 podman[52011]: 2026-03-10 13:12:51.060627152 +0000 UTC m=+0.008823664 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 systemd[1]: Started 
Ceph mon.a for bd98ed20-1c82-11f1-9239-ff903ae4ee6f. 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: set uid:gid to 167:167 (ceph:ceph) 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: pidfile_write: ignore empty --pid-file 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: load: jerasure load: lrc 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: RocksDB version: 7.9.2 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Git sha 0 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: DB SUMMARY 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: DB Session ID: GWK0V0GLV69CB58S5X3F 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: CURRENT file: CURRENT 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: IDENTITY file: IDENTITY 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 75535 ; 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.error_if_exists: 0 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.create_if_missing: 0 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.paranoid_checks: 1 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.env: 0x56330abb7dc0 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.fs: PosixFileSystem 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.info_log: 0x56330b74c700 2026-03-10T13:12:51.312 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: 
Options.max_file_opening_threads: 16 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.statistics: (nil) 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.use_fsync: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_log_file_size: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.keep_log_file_num: 1000 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.recycle_log_file_num: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.allow_fallocate: 1 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.allow_mmap_reads: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.allow_mmap_writes: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.use_direct_reads: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.create_missing_column_families: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.db_log_dir: 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.wal_dir: 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.advise_random_on_open: 1 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.db_write_buffer_size: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.write_buffer_manager: 0x56330b751900 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 
ceph-mon[52048]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.rate_limiter: (nil) 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.wal_recovery_mode: 2 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.enable_thread_tracking: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.enable_pipelined_write: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.unordered_write: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.row_cache: None 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.wal_filter: None 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.allow_ingest_behind: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.two_write_queues: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.manual_wal_flush: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.wal_compression: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.atomic_flush: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.log_readahead_size: 0 2026-03-10T13:12:51.313 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.best_efforts_recovery: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.allow_data_in_errors: 0 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.db_host_id: __hostname__ 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T13:12:51.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_background_jobs: 2 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_background_compactions: -1 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_subcompactions: 1 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_total_wal_size: 0 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_open_files: -1 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.bytes_per_sync: 0 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compaction_readahead_size: 0 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_background_flushes: -1 2026-03-10T13:12:51.314 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Compression algorithms supported: 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: kZSTD supported: 0 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: kXpressCompression supported: 0 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: kBZip2Compression supported: 0 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: kLZ4Compression supported: 1 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: kZlibCompression supported: 1 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: kLZ4HCCompression supported: 1 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: kSnappyCompression supported: 1 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.merge_operator: 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compaction_filter: None 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compaction_filter_factory: None 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.sst_partitioner_factory: None 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56330b74c640) 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: cache_index_and_filter_blocks: 1 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T13:12:51.314 
INFO:journalctl@ceph.mon.a.vm07.stdout: pin_top_level_index_and_filter: 1 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: index_type: 0 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: data_block_index_type: 0 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: index_shortening: 1 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: checksum: 4 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: no_block_cache: 0 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: block_cache: 0x56330b771350 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: block_cache_name: BinnedLRUCache 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: block_cache_options: 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: capacity : 536870912 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: num_shard_bits : 4 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: strict_capacity_limit : 0 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: high_pri_pool_ratio: 0.000 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: block_cache_compressed: (nil) 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: persistent_cache: (nil) 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: block_size: 4096 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: block_size_deviation: 10 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: block_restart_interval: 16 2026-03-10T13:12:51.314 INFO:journalctl@ceph.mon.a.vm07.stdout: index_block_restart_interval: 1 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout: metadata_block_size: 4096 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout: partition_filters: 0 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout: use_delta_encoding: 1 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout: filter_policy: bloomfilter 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout: whole_key_filtering: 1 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout: verify_compression: 0 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout: read_amp_bytes_per_bit: 0 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout: format_version: 5 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout: enable_index_compression: 1 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout: block_align: 0 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout: max_auto_readahead_size: 262144 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout: prepopulate_block_cache: 0 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout: initial_auto_readahead_size: 8192 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout: num_file_reads_for_auto_readahead: 2 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.write_buffer_size: 33554432 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_write_buffer_number: 2 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compression: NoCompression 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: 
Options.bottommost_compression: Disabled 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.prefix_extractor: nullptr 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.num_levels: 7 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compression_opts.level: 32767 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compression_opts.strategy: 0 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T13:12:51.315 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compression_opts.enabled: false 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.target_file_size_base: 67108864 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.arena_block_size: 1048576 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T13:12:51.315 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: 
Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.disable_auto_compactions: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.inplace_update_support: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.bloom_locality: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.max_successive_merges: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.paranoid_file_checks: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 
ceph-mon[52048]: rocksdb: Options.force_consistency_checks: 1 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.ttl: 2592000 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.enable_blob_files: false 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.min_blob_size: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.blob_file_size: 268435456 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.blob_file_starting_level: 0 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dd37726c-664e-495c-9519-ee2266d61b1c 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773148371128010, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: 
rocksdb: EVENT_LOG_v1 {"time_micros": 1773148371130220, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 225, "table_properties": {"data_size": 70895, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9705, "raw_average_key_size": 49, "raw_value_size": 65374, "raw_average_value_size": 333, "num_data_blocks": 8, "num_entries": 196, "num_filter_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773148371, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dd37726c-664e-495c-9519-ee2266d61b1c", "db_session_id": "GWK0V0GLV69CB58S5X3F", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773148371130272, "job": 1, "event": "recovery_finished"} 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56330b772e00 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: rocksdb: DB pointer 0x56330b888000 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: starting mon.a rank 0 at public addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] at bind addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: mon.a@-1(???) 
e1 preinit fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: mon.a@-1(???).mds e1 new map 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: mon.a@-1(???).mds e1 print_map 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout: e1 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout: btime 2026-03-10T13:12:49:860456+0000 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout: legacy client fscid: -1 2026-03-10T13:12:51.316 INFO:journalctl@ceph.mon.a.vm07.stdout: 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout: No filesystems configured 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: monmap epoch 1 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: last_changed 2026-03-10T13:12:48.686420+0000 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: created 2026-03-10T13:12:48.686420+0000 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: min_mon_release 19 (squid) 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: election_strategy: 1 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: fsmap 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-mon[52048]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T13:12:51.317 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 
13:12:51 vm07 ceph-mon[52048]: mgrmap e1: no daemons active 2026-03-10T13:12:51.456 INFO:teuthology.orchestra.run.vm07.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T13:12:51.456 INFO:teuthology.orchestra.run.vm07.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:12:51.456 INFO:teuthology.orchestra.run.vm07.stdout:Creating mgr... 2026-03-10T13:12:51.457 INFO:teuthology.orchestra.run.vm07.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T13:12:51.457 INFO:teuthology.orchestra.run.vm07.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-10T13:12:51.599 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a 2026-03-10T13:12:51.599 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to reset failed state of unit ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a.service: Unit ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a.service not loaded. 2026-03-10T13:12:51.722 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f.target.wants/ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a.service → /etc/systemd/system/ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@.service. 2026-03-10T13:12:51.869 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:51 vm07 podman[52273]: 2026-03-10 13:12:51.870014199 +0000 UTC m=+0.060877272 container init 7915ba879fdfc0daa333ca484a93017c6427175e64ec07dbc91fd9557105e62b (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20260223) 2026-03-10T13:12:51.884 INFO:teuthology.orchestra.run.vm07.stdout:firewalld does not appear to be present 2026-03-10T13:12:51.884 INFO:teuthology.orchestra.run.vm07.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T13:12:51.884 INFO:teuthology.orchestra.run.vm07.stdout:firewalld does not appear to be present 2026-03-10T13:12:51.884 INFO:teuthology.orchestra.run.vm07.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-10T13:12:51.884 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mgr to start... 2026-03-10T13:12:51.884 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mgr... 
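The two "Verifying port" records above are cephadm checking that the mgr's ports are still free before deploying the daemon; 9283 is the mgr prometheus exporter's default port and 8765 appears to be cephadm's service-discovery endpoint. A minimal sketch of that kind of bind test (illustrative only; the helper name is hypothetical, not cephadm's actual code):

    import socket

    def port_is_free(addr: str, port: int) -> bool:
        # A port counts as free if bind() succeeds; a daemon that already
        # owns it makes bind() raise EADDRINUSE (an OSError).
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind((addr, port))
                return True
            except OSError:
                return False

    # The two ports probed in the log above:
    for port in (9283, 8765):
        print(port, "free" if port_is_free("0.0.0.0", port) else "in use")

The "Not possible to open ports <[9283, 8765]>" records are the separate firewalld step; firewalld is absent on this VM, so cephadm just notes that and moves on.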
2026-03-10T13:12:52.183 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:51 vm07 podman[52273]: 2026-03-10 13:12:51.874273571 +0000 UTC m=+0.065136655 container start 7915ba879fdfc0daa333ca484a93017c6427175e64ec07dbc91fd9557105e62b (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , ceph=True) 2026-03-10T13:12:52.183 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:51 vm07 bash[52273]: 7915ba879fdfc0daa333ca484a93017c6427175e64ec07dbc91fd9557105e62b 2026-03-10T13:12:52.183 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:51 vm07 podman[52273]: 2026-03-10 13:12:51.818263257 +0000 UTC m=+0.009126341 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:12:52.183 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:51 vm07 systemd[1]: Started Ceph mgr.a for bd98ed20-1c82-11f1-9239-ff903ae4ee6f. 2026-03-10T13:12:52.183 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:51 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:51.989+0000 7fabee3f7140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T13:12:52.183 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:52 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:52.030+0000 7fabee3f7140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsid": "bd98ed20-1c82-11f1-9239-ff903ae4ee6f", 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 0 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T13:12:52.192 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T13:12:52.192 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T13:12:49:860456+0000", 2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T13:12:52.193 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ],
2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T13:12:49.861010+0000",
2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }
2026-03-10T13:12:52.193 INFO:teuthology.orchestra.run.vm07.stdout:mgr not available, waiting (1/15)...
2026-03-10T13:12:52.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:52 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/3977971859' entity='client.admin'
2026-03-10T13:12:52.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:52 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/997959739' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T13:12:52.591 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:52 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:52.412+0000 7fabee3f7140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T13:12:53.090 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:52 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:52.707+0000 7fabee3f7140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T13:12:53.091 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:52 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T13:12:53.091 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:52 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
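The status dump above is the bootstrap CLI polling `ceph status --format json-pretty` (its dispatch record from client.admin is visible right after) and retrying while `mgrmap.available` is still false, up to the 15 tries the counter shows. Because teuthology wraps every fragment in a `/usr/bin/ceph: stdout` prefix, the JSON is awkward to read or grep; a small sketch that reassembles one dump and applies the same availability test (regex and function name are ad hoc, written against this log's format, not part of teuthology):

    import json
    import re

    # Each JSON fragment in this log is wrapped as:
    #   <teuthology-ts> INFO:teuthology.orchestra.run.<host>.stdout:/usr/bin/ceph: stdout <payload>
    PREFIX = re.compile(
        r"^\S+ INFO:teuthology\.orchestra\.run\.[^:]+\.stdout:/usr/bin/ceph: stdout ?"
    )

    def reassemble_status(lines):
        # Strip the wrapper from the fragments of a single dump and glue
        # the payloads back into one JSON document.
        parts = [PREFIX.sub("", ln) for ln in lines if PREFIX.match(ln)]
        return json.loads("".join(parts))

    # status = reassemble_status(lines_of_one_dump)
    # status["mgrmap"]["available"]  -> False here; True once the mon logs
    #                                   "Manager daemon a is now available"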
2026-03-10T13:12:53.091 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:52 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: from numpy import show_config as show_numpy_config 2026-03-10T13:12:53.091 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:52 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:52.789+0000 7fabee3f7140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T13:12:53.091 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:52 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:52.823+0000 7fabee3f7140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T13:12:53.091 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:52 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:52.888+0000 7fabee3f7140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T13:12:53.755 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:53 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:53.352+0000 7fabee3f7140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T13:12:53.755 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:53 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:53.456+0000 7fabee3f7140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:12:53.755 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:53 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:53.493+0000 7fabee3f7140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T13:12:53.755 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:53 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:53.525+0000 7fabee3f7140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T13:12:53.755 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:53 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:53.564+0000 7fabee3f7140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T13:12:53.755 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:53 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:53.599+0000 7fabee3f7140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T13:12:54.015 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:53 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:53.754+0000 7fabee3f7140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T13:12:54.015 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:53 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:53.802+0000 7fabee3f7140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T13:12:54.016 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:54 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:54.015+0000 7fabee3f7140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T13:12:54.545 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:54 vm07 ceph-mon[52048]: from='client.? 
192.168.123.107:0/3903887833' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:12:54.545 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:54 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:54.289+0000 7fabee3f7140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T13:12:54.545 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:54 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:54.326+0000 7fabee3f7140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T13:12:54.546 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:54 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:54.371+0000 7fabee3f7140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T13:12:54.546 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:54 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:54.459+0000 7fabee3f7140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T13:12:54.546 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:54 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:54.494+0000 7fabee3f7140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T13:12:54.548 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-10T13:12:54.548 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:12:54.548 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsid": "bd98ed20-1c82-11f1-9239-ff903ae4ee6f", 2026-03-10T13:12:54.548 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T13:12:54.548 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T13:12:54.548 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T13:12:54.548 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 0 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osdmap": { 
2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T13:12:54.549 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T13:12:49:860456+0000", 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:54.550 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T13:12:49.861010+0000", 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:12:54.550 INFO:teuthology.orchestra.run.vm07.stdout:mgr not available, waiting (2/15)... 2026-03-10T13:12:54.840 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:54 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:54.575+0000 7fabee3f7140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T13:12:54.840 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:54 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:54.684+0000 7fabee3f7140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:12:55.340 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:54 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:54.880+0000 7fabee3f7140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T13:12:55.340 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:54 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:54.918+0000 7fabee3f7140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T13:12:55.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:55 vm07 ceph-mon[52048]: Activating manager daemon a 2026-03-10T13:12:55.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:55 vm07 ceph-mon[52048]: mgrmap e2: a(active, starting, since 0.00379601s) 2026-03-10T13:12:55.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:55 vm07 ceph-mon[52048]: from='mgr.14100 192.168.123.107:0/1474908022' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:12:55.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:55 vm07 ceph-mon[52048]: from='mgr.14100 192.168.123.107:0/1474908022' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:12:55.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:55 vm07 ceph-mon[52048]: from='mgr.14100 192.168.123.107:0/1474908022' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:12:55.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:55 vm07 ceph-mon[52048]: from='mgr.14100 192.168.123.107:0/1474908022' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:12:55.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:55 vm07 ceph-mon[52048]: from='mgr.14100 192.168.123.107:0/1474908022' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:12:55.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:55 vm07 ceph-mon[52048]: Manager daemon a is now available 2026-03-10T13:12:55.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:55 vm07 ceph-mon[52048]: from='mgr.14100 192.168.123.107:0/1474908022' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:12:55.841 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:55 vm07 ceph-mon[52048]: from='mgr.14100 192.168.123.107:0/1474908022' entity='mgr.a' 2026-03-10T13:12:55.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:55 vm07 ceph-mon[52048]: from='mgr.14100 192.168.123.107:0/1474908022' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:12:55.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:55 vm07 ceph-mon[52048]: from='mgr.14100 192.168.123.107:0/1474908022' entity='mgr.a' 2026-03-10T13:12:55.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:55 vm07 ceph-mon[52048]: from='mgr.14100 192.168.123.107:0/1474908022' entity='mgr.a' 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsid": "bd98ed20-1c82-11f1-9239-ff903ae4ee6f", 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 0 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 
"osd_in_since": 0, 2026-03-10T13:12:57.025 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T13:12:49:860456+0000", 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T13:12:57.026 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:12:57.027 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T13:12:49.861010+0000", 2026-03-10T13:12:57.027 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:12:57.027 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:12:57.027 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T13:12:57.027 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:12:57.027 INFO:teuthology.orchestra.run.vm07.stdout:mgr is available 2026-03-10T13:12:57.140 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:56 vm07 ceph-mon[52048]: mgrmap e3: a(active, since 1.00788s) 2026-03-10T13:12:57.140 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:56 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2158709133' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout fsid = bd98ed20-1c82-11f1-9239-ff903ae4ee6f 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.107:3300,v1:192.168.123.107:6789] 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T13:12:57.383 INFO:teuthology.orchestra.run.vm07.stdout:Enabling cephadm module... 2026-03-10T13:12:58.263 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:57 vm07 ceph-mon[52048]: mgrmap e4: a(active, since 2s) 2026-03-10T13:12:58.263 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:57 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/843217421' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T13:12:58.263 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:57 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/843217421' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T13:12:58.263 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:57 vm07 ceph-mon[52048]: from='client.? 
192.168.123.107:0/1316955290' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T13:12:58.590 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:58 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: ignoring --setuser ceph since I am not root 2026-03-10T13:12:58.590 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:58 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: ignoring --setgroup ceph since I am not root 2026-03-10T13:12:58.590 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:58 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:58.367+0000 7fb9c33ee140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T13:12:58.590 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:58 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:58.420+0000 7fb9c33ee140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T13:12:58.768 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:12:58.768 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-10T13:12:58.768 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T13:12:58.768 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "active_name": "a", 2026-03-10T13:12:58.768 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T13:12:58.768 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:12:58.768 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for the mgr to restart... 2026-03-10T13:12:58.768 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mgr epoch 5... 2026-03-10T13:12:59.090 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:58 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:58.816+0000 7fb9c33ee140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T13:12:59.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:59 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1316955290' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T13:12:59.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:59 vm07 ceph-mon[52048]: mgrmap e5: a(active, since 3s) 2026-03-10T13:12:59.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:12:59 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1513566349' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T13:12:59.591 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:59 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:59.125+0000 7fb9c33ee140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T13:12:59.591 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:59 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
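
Enabling the cephadm module forces the active mgr to respawn, so the bootstrap records the next mgrmap epoch ("Waiting for mgr epoch 5..." above) and polls until the restarted mgr reports it. A minimal shell sketch of that wait, assuming the admin keyring is installed and jq is available; the mgr_status call is the same one visible as a dispatch in the mon journal further down:

    # Sketch only: wait until the respawned mgr has caught up to the target mgrmap epoch
    epoch=$(sudo ceph mgr stat -f json | jq -r .epoch)
    until sudo ceph tell mgr mgr_status | jq -e ".mgrmap_epoch >= $epoch" >/dev/null; do
        sleep 2
    done

The {"mgrmap_epoch": 7, "initialized": true} reply below is what such a poll sees once the new mgr instance is up.
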
2026-03-10T13:12:59.591 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:59 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T13:12:59.591 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:59 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: from numpy import show_config as show_numpy_config 2026-03-10T13:12:59.591 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:59 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:59.204+0000 7fb9c33ee140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T13:12:59.591 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:59 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:59.239+0000 7fb9c33ee140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T13:12:59.591 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:59 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:59.303+0000 7fb9c33ee140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T13:13:00.090 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:59 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:59.763+0000 7fb9c33ee140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T13:13:00.091 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:59 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:59.868+0000 7fb9c33ee140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:13:00.091 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:59 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:59.905+0000 7fb9c33ee140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T13:13:00.091 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:59 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:59.937+0000 7fb9c33ee140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T13:13:00.091 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:12:59 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:12:59.974+0000 7fb9c33ee140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T13:13:00.091 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:00 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:00.009+0000 7fb9c33ee140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T13:13:00.421 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:00 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:00.166+0000 7fb9c33ee140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T13:13:00.422 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:00 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:00.213+0000 7fb9c33ee140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T13:13:00.687 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:00 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:00.421+0000 7fb9c33ee140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T13:13:00.938 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:00 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:00.686+0000 
7fb9c33ee140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T13:13:00.939 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:00 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:00.722+0000 7fb9c33ee140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T13:13:00.939 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:00 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:00.760+0000 7fb9c33ee140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T13:13:00.939 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:00 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:00.831+0000 7fb9c33ee140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T13:13:00.939 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:00 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:00.865+0000 7fb9c33ee140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T13:13:01.203 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:00 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:00.938+0000 7fb9c33ee140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T13:13:01.203 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:01.044+0000 7fb9c33ee140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:13:01.203 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:01.169+0000 7fb9c33ee140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T13:13:01.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-mon[52048]: Active manager daemon a restarted 2026-03-10T13:13:01.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-mon[52048]: Activating manager daemon a 2026-03-10T13:13:01.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-mon[52048]: osdmap e2: 0 total, 0 up, 0 in 2026-03-10T13:13:01.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-mon[52048]: mgrmap e6: a(active, starting, since 0.00500736s) 2026-03-10T13:13:01.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:13:01.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:13:01.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:13:01.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:13:01.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:13:01.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-mon[52048]: Manager daemon a is now available 2026-03-10T13:13:01.591 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' 2026-03-10T13:13:01.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' 2026-03-10T13:13:01.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' 2026-03-10T13:13:01.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:13:01.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:13:01.591 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:01 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:01.203+0000 7fb9c33ee140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T13:13:02.350 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:13:02.350 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7, 2026-03-10T13:13:02.350 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T13:13:02.350 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:13:02.350 INFO:teuthology.orchestra.run.vm07.stdout:mgr epoch 5 is available 2026-03-10T13:13:02.350 INFO:teuthology.orchestra.run.vm07.stdout:Setting orchestrator backend to cephadm... 2026-03-10T13:13:02.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:02 vm07 ceph-mon[52048]: Found migration_current of "None". Setting to last migration. 2026-03-10T13:13:02.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:02 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:13:02.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:02 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:13:02.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:02 vm07 ceph-mon[52048]: mgrmap e7: a(active, since 1.00759s) 2026-03-10T13:13:03.114 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-10T13:13:03.114 INFO:teuthology.orchestra.run.vm07.stdout:Generating ssh key... 
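
The "Setting orchestrator backend to cephadm..." and "Generating ssh key..." steps above map onto plain ceph CLI calls, which show up as the orch set backend, cephadm set-user, and cephadm generate-key dispatches in the mon journal below. A hand-run equivalent (standard cephadm commands, not teuthology code):

    sudo ceph mgr module enable cephadm
    sudo ceph orch set backend cephadm      # reports "value unchanged" if already in effect
    sudo ceph cephadm set-user root
    sudo ceph cephadm generate-key          # creates the cluster-wide ed25519 SSH identity
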
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-mon[52048]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-mon[52048]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a'
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a'
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a'
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: Generating public/private ed25519 key pair.
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: Your identification has been saved in /tmp/tmpzcxhqbad/key
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: Your public key has been saved in /tmp/tmpzcxhqbad/key.pub
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: The key fingerprint is:
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: SHA256:WVQIg2Dw+6J0XIkbwnWcHtreTU1bOItqqWhfancaMR8 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: The key's randomart image is:
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: +--[ED25519 256]--+
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: | ..o. .o..o. |
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: | o ... o. . |
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: | o = . + . |
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: | . . B o o + = |
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: | o = = S E + |
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: | o * . O . |
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: | . = o B o |
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: | . o.o.=... |
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: | ...o+..o |
2026-03-10T13:13:03.512 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:03 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: +----[SHA256]-----+
2026-03-10T13:13:03.888 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB9TgsjFRDUakgKmYF66zyPbdnxs4+c/nJieGcPcZhVL ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f
2026-03-10T13:13:03.888 INFO:teuthology.orchestra.run.vm07.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub
2026-03-10T13:13:03.888 INFO:teuthology.orchestra.run.vm07.stdout:Adding key to root@localhost authorized_keys...
2026-03-10T13:13:03.888 INFO:teuthology.orchestra.run.vm07.stdout:Adding host vm07...
2026-03-10T13:13:04.427 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:04 vm07 ceph-mon[52048]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:13:04.428 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:04 vm07 ceph-mon[52048]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:13:04.428 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:04 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a'
2026-03-10T13:13:04.428 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:04 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a'
2026-03-10T13:13:04.428 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:04 vm07 ceph-mon[52048]: mgrmap e8: a(active, since 2s)
2026-03-10T13:13:04.428 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:04 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:05.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:05 vm07 ceph-mon[52048]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:13:05.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:05 vm07 ceph-mon[52048]: Generating ssh key...
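
Distributing that key and registering the node, as in "Adding key to root@localhost authorized_keys..." and "Adding host vm07..." above, follows the documented cephadm host-add procedure; a manual equivalent with the hostname and address from this run:

    sudo ceph cephadm get-pub-key > ceph.pub
    sudo ssh-copy-id -f -i ceph.pub root@vm07
    sudo ceph orch host add vm07 192.168.123.107
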
2026-03-10T13:13:05.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:05 vm07 ceph-mon[52048]: [10/Mar/2026:13:13:03] ENGINE Bus STARTING 2026-03-10T13:13:05.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:05 vm07 ceph-mon[52048]: [10/Mar/2026:13:13:03] ENGINE Serving on https://192.168.123.107:7150 2026-03-10T13:13:05.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:05 vm07 ceph-mon[52048]: [10/Mar/2026:13:13:03] ENGINE Client ('192.168.123.107', 52072) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:13:05.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:05 vm07 ceph-mon[52048]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:13:05.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:05 vm07 ceph-mon[52048]: [10/Mar/2026:13:13:03] ENGINE Serving on http://192.168.123.107:8765 2026-03-10T13:13:05.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:05 vm07 ceph-mon[52048]: [10/Mar/2026:13:13:03] ENGINE Bus STARTED 2026-03-10T13:13:05.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:05 vm07 ceph-mon[52048]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "addr": "192.168.123.107", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:13:05.730 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Added host 'vm07' with addr '192.168.123.107' 2026-03-10T13:13:05.731 INFO:teuthology.orchestra.run.vm07.stdout:Deploying unmanaged mon service... 2026-03-10T13:13:06.108 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-10T13:13:06.109 INFO:teuthology.orchestra.run.vm07.stdout:Deploying unmanaged mgr service... 2026-03-10T13:13:06.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:06 vm07 ceph-mon[52048]: Deploying cephadm binary to vm07 2026-03-10T13:13:06.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:06 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' 2026-03-10T13:13:06.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:06 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:13:06.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:06 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' 2026-03-10T13:13:06.485 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-10T13:13:07.238 INFO:teuthology.orchestra.run.vm07.stdout:Enabling the dashboard module... 2026-03-10T13:13:07.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:07 vm07 ceph-mon[52048]: Added host vm07 2026-03-10T13:13:07.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:07 vm07 ceph-mon[52048]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:13:07.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:07 vm07 ceph-mon[52048]: Saving service mon spec with placement count:5 2026-03-10T13:13:07.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:07 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' 2026-03-10T13:13:07.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:07 vm07 ceph-mon[52048]: from='client.? 
192.168.123.107:0/3620138539' entity='client.admin' 2026-03-10T13:13:07.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:07 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/3678490066' entity='client.admin' 2026-03-10T13:13:08.536 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:08 vm07 ceph-mon[52048]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:13:08.536 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:08 vm07 ceph-mon[52048]: Saving service mgr spec with placement count:2 2026-03-10T13:13:08.536 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:08 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' 2026-03-10T13:13:08.536 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:08 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' 2026-03-10T13:13:08.536 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:08 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/3740913303' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T13:13:08.536 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:08 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' 2026-03-10T13:13:08.536 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:08 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' 2026-03-10T13:13:08.536 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:08 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:13:08.536 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:08 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' 2026-03-10T13:13:08.536 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:08 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.agent.vm07", "caps": []}]: dispatch 2026-03-10T13:13:08.536 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:08 vm07 ceph-mon[52048]: from='mgr.14118 192.168.123.107:0/3029326115' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "client.agent.vm07", "caps": []}]': finished 2026-03-10T13:13:08.841 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:08 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: ignoring --setuser ceph since I am not root 2026-03-10T13:13:08.841 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:08 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: ignoring --setgroup ceph since I am not root 2026-03-10T13:13:08.841 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:08 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:08.607+0000 7f698e9da140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T13:13:08.841 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:08 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:08.654+0000 7f698e9da140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T13:13:09.009 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:13:09.009 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 9, 2026-03-10T13:13:09.009 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T13:13:09.009 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "active_name": "a", 2026-03-10T13:13:09.009 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T13:13:09.009 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:13:09.009 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for the mgr to restart... 2026-03-10T13:13:09.009 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mgr epoch 9... 2026-03-10T13:13:09.340 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:09 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:09.051+0000 7f698e9da140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T13:13:09.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:09 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/3740913303' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T13:13:09.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:09 vm07 ceph-mon[52048]: mgrmap e9: a(active, since 7s) 2026-03-10T13:13:09.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:09 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1399130692' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T13:13:09.840 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:09 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:09.360+0000 7f698e9da140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T13:13:09.840 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:09 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T13:13:09.841 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:09 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
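
Every mgr respawn replays the full run of "Module ... has missing NOTIFY_TYPES member" warnings; they come from bundled mgr Python modules that do not declare NOTIFY_TYPES and appear to be harmless noise in this release. When following a single daemon through a busy run like this, the journal can be tailed with that noise filtered out; a sketch assuming cephadm's usual ceph-<fsid>@<daemon> systemd unit naming, with the fsid from this cluster:

    sudo journalctl -f -u ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a.service \
        | grep -v 'missing NOTIFY_TYPES member'
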
2026-03-10T13:13:09.841 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:09 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: from numpy import show_config as show_numpy_config 2026-03-10T13:13:09.841 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:09 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:09.441+0000 7f698e9da140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T13:13:09.841 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:09 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:09.477+0000 7f698e9da140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T13:13:09.841 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:09 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:09.542+0000 7f698e9da140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T13:13:10.340 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:09 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:09.997+0000 7f698e9da140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T13:13:10.341 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:10.098+0000 7f698e9da140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:13:10.341 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:10.133+0000 7f698e9da140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T13:13:10.341 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:10.164+0000 7f698e9da140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T13:13:10.341 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:10.201+0000 7f698e9da140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T13:13:10.341 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:10.235+0000 7f698e9da140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T13:13:10.840 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:10.388+0000 7f698e9da140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T13:13:10.840 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:10.434+0000 7f698e9da140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T13:13:10.841 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:10.633+0000 7f698e9da140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T13:13:11.252 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:10.890+0000 7f698e9da140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T13:13:11.252 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:10.924+0000 7f698e9da140 -1 mgr[py] Module selftest has 
missing NOTIFY_TYPES member 2026-03-10T13:13:11.252 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:10.961+0000 7f698e9da140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T13:13:11.252 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:11.033+0000 7f698e9da140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T13:13:11.252 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:11.067+0000 7f698e9da140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T13:13:11.252 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:11.140+0000 7f698e9da140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T13:13:11.547 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:11.251+0000 7f698e9da140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:13:11.547 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:11.384+0000 7f698e9da140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T13:13:11.547 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[52284]: 2026-03-10T13:13:11.419+0000 7f698e9da140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T13:13:11.842 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-mon[52048]: Active manager daemon a restarted 2026-03-10T13:13:11.842 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-mon[52048]: Activating manager daemon a 2026-03-10T13:13:11.842 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-mon[52048]: osdmap e3: 0 total, 0 up, 0 in 2026-03-10T13:13:11.842 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-mon[52048]: mgrmap e10: a(active, starting, since 0.0922671s) 2026-03-10T13:13:11.842 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:13:11.842 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:13:11.842 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:13:11.842 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:13:11.842 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:13:11.842 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-mon[52048]: Manager daemon a is now available 2026-03-10T13:13:11.842 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:11 vm07 
ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:11.842 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:11.842 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:11 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T13:13:12.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout {
2026-03-10T13:13:12.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11,
2026-03-10T13:13:12.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-10T13:13:12.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }
2026-03-10T13:13:12.657 INFO:teuthology.orchestra.run.vm07.stdout:mgr epoch 9 is available
2026-03-10T13:13:12.657 INFO:teuthology.orchestra.run.vm07.stdout:Generating a dashboard self-signed certificate...
2026-03-10T13:13:12.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:12 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T13:13:12.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:12 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:12.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:12 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:13:12.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:12 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:12.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:12 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.agent.vm07", "caps": []}]: dispatch
2026-03-10T13:13:12.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:12 vm07 ceph-mon[52048]: [10/Mar/2026:13:13:12] ENGINE Bus STARTING
2026-03-10T13:13:12.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:12 vm07 ceph-mon[52048]: Deploying daemon agent.vm07 on vm07
2026-03-10T13:13:12.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:12 vm07 ceph-mon[52048]: [10/Mar/2026:13:13:12] ENGINE Serving on https://192.168.123.107:7150
2026-03-10T13:13:12.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:12 vm07 ceph-mon[52048]: [10/Mar/2026:13:13:12] ENGINE Client ('192.168.123.107', 52098) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T13:13:12.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:12 vm07 ceph-mon[52048]: [10/Mar/2026:13:13:12] ENGINE Serving on http://192.168.123.107:8765
2026-03-10T13:13:12.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:12 vm07 ceph-mon[52048]: [10/Mar/2026:13:13:12] ENGINE Bus STARTED
2026-03-10T13:13:12.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:12 vm07 ceph-mon[52048]: mgrmap e11: a(active, since 1.0964s)
2026-03-10T13:13:12.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:12 vm07 ceph-mon[52048]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T13:13:12.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:12 vm07 ceph-mon[52048]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T13:13:13.090 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Self-signed certificate created
2026-03-10T13:13:13.090 INFO:teuthology.orchestra.run.vm07.stdout:Creating initial admin user...
2026-03-10T13:13:13.601 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$pBy0PCcoosR8au3EjJBEAO/b96flbkB9nv8pMtULbgvlJQP.eYmN2", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773148393, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true}
2026-03-10T13:13:13.601 INFO:teuthology.orchestra.run.vm07.stdout:Fetching dashboard port number...
2026-03-10T13:13:13.966 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 8443
2026-03-10T13:13:13.966 INFO:teuthology.orchestra.run.vm07.stdout:firewalld does not appear to be present
2026-03-10T13:13:13.966 INFO:teuthology.orchestra.run.vm07.stdout:Not possible to open ports <[8443]>. firewalld.service is not available
2026-03-10T13:13:13.967 INFO:teuthology.orchestra.run.vm07.stdout:Ceph Dashboard is now available at:
2026-03-10T13:13:13.967 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:13.967 INFO:teuthology.orchestra.run.vm07.stdout: URL: https://vm07.local:8443/
2026-03-10T13:13:13.967 INFO:teuthology.orchestra.run.vm07.stdout: User: admin
2026-03-10T13:13:13.967 INFO:teuthology.orchestra.run.vm07.stdout: Password: 70i01ta1mt
2026-03-10T13:13:13.967 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:13.967 INFO:teuthology.orchestra.run.vm07.stdout:Saving cluster configuration to /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/config directory
2026-03-10T13:13:13.991 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:13 vm07 ceph-mon[52048]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:13:13.991 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:13.991 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:13.991 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:13 vm07 ceph-mon[52048]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:13:13.991 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:13.991 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:13 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2955405815' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-10T13:13:14.382 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status
2026-03-10T13:13:14.382 INFO:teuthology.orchestra.run.vm07.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config:
2026-03-10T13:13:14.382 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:14.382 INFO:teuthology.orchestra.run.vm07.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
2026-03-10T13:13:14.383 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:14.383 INFO:teuthology.orchestra.run.vm07.stdout:Or, if you are only running a single cluster on this host:
2026-03-10T13:13:14.383 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:14.383 INFO:teuthology.orchestra.run.vm07.stdout: sudo /home/ubuntu/cephtest/cephadm shell
2026-03-10T13:13:14.383 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:14.383 INFO:teuthology.orchestra.run.vm07.stdout:Please consider enabling telemetry to help improve Ceph:
2026-03-10T13:13:14.383 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:14.383 INFO:teuthology.orchestra.run.vm07.stdout: ceph telemetry on
2026-03-10T13:13:14.383 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:14.383 INFO:teuthology.orchestra.run.vm07.stdout:For more information see:
2026-03-10T13:13:14.383 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:14.383 INFO:teuthology.orchestra.run.vm07.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/
2026-03-10T13:13:14.383 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:14.383 INFO:teuthology.orchestra.run.vm07.stdout:Bootstrap complete.
2026-03-10T13:13:14.414 INFO:tasks.cephadm:Fetching config...
2026-03-10T13:13:14.414 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T13:13:14.414 DEBUG:teuthology.orchestra.run.vm07:> dd if=/etc/ceph/ceph.conf of=/dev/stdout
2026-03-10T13:13:14.435 INFO:tasks.cephadm:Fetching client.admin keyring...
2026-03-10T13:13:14.435 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T13:13:14.435 DEBUG:teuthology.orchestra.run.vm07:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout
2026-03-10T13:13:14.492 INFO:tasks.cephadm:Fetching mon keyring...
2026-03-10T13:13:14.493 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T13:13:14.493 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/keyring of=/dev/stdout
2026-03-10T13:13:14.570 INFO:tasks.cephadm:Fetching pub ssh key...
2026-03-10T13:13:14.570 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T13:13:14.570 DEBUG:teuthology.orchestra.run.vm07:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout
2026-03-10T13:13:14.631 INFO:tasks.cephadm:Installing pub ssh key for root users...
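
The three "Fetching ..." steps above stream each file to stdout with dd over the already-open SSH channel rather than copying it with scp. A minimal sketch of the same pattern with plain subprocess (host and paths taken from this run; fetch_file is an illustrative helper, not the teuthology API):

    import subprocess

    def fetch_file(host: str, path: str, sudo: bool = False) -> bytes:
        """Stream a remote file to stdout via dd, as the cephadm task does above."""
        dd = ("sudo " if sudo else "") + f"dd if={path} of=/dev/stdout"
        return subprocess.run(["ssh", host, dd], check=True, capture_output=True).stdout

    conf = fetch_file("vm07.local", "/etc/ceph/ceph.conf")
    mon_keyring = fetch_file(
        "vm07.local",
        "/var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/keyring",
        sudo=True)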
2026-03-10T13:13:14.631 DEBUG:teuthology.orchestra.run.vm07:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB9TgsjFRDUakgKmYF66zyPbdnxs4+c/nJieGcPcZhVL ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-10T13:13:14.742 INFO:teuthology.orchestra.run.vm07.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB9TgsjFRDUakgKmYF66zyPbdnxs4+c/nJieGcPcZhVL ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f
2026-03-10T13:13:14.752 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph config set mgr mgr/cephadm/allow_ptrace true
2026-03-10T13:13:15.011 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:15.087 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:14 vm07 ceph-mon[52048]: mgrmap e12: a(active, since 2s)
2026-03-10T13:13:15.088 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:14 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1171177618' entity='client.admin'
2026-03-10T13:13:15.645 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755
2026-03-10T13:13:15.646 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph orch client-keyring set client.admin '*' --mode 0755
2026-03-10T13:13:16.004 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:16.457 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:16.457 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:16.457 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:16.457 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:16.457 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:16.457 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:16.457 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:16.458 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:16.458 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:16.458 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:16.458 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:16.458 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/139667897' entity='client.admin'
2026-03-10T13:13:16.458 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:16.458 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:16.458 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:16.458 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:16.458 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:16 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:16.491 INFO:tasks.cephadm:Setting crush tunables to default
2026-03-10T13:13:16.491 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph osd crush tunables default
2026-03-10T13:13:16.736 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:17.340 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:17 vm07 ceph-mon[52048]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:13:17.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:17 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:17.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:17 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:17.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:17 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:17.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:17 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:13:17.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:17 vm07 ceph-mon[52048]: Updating vm07:/etc/ceph/ceph.conf
2026-03-10T13:13:17.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:17 vm07 ceph-mon[52048]: Updating vm07:/var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/config/ceph.conf
2026-03-10T13:13:17.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:17 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:17.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:17 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/4245609081' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-10T13:13:17.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:17 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:17.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:17 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:17.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:17 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:17.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:17 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:17.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:17 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:17.547 INFO:teuthology.orchestra.run.vm07.stderr:adjusted tunables profile to default
2026-03-10T13:13:17.694 INFO:tasks.cephadm:Adding mon.a on vm07
2026-03-10T13:13:17.695 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph orch apply mon '1;vm07:192.168.123.107=a'
2026-03-10T13:13:17.850 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:18.079 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled mon update...
2026-03-10T13:13:18.233 INFO:tasks.cephadm:Waiting for 1 mons in monmap...
2026-03-10T13:13:18.233 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph mon dump -f json
2026-03-10T13:13:18.530 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:18.776 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T13:13:18.776 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: Updating vm07:/var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/config/ceph.client.admin.keyring
2026-03-10T13:13:18.776 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/4245609081' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-10T13:13:18.776 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: osdmap e4: 0 total, 0 up, 0 in
2026-03-10T13:13:18.776 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:18.776 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:18.777 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:18.777 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:13:18.777 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:18.777 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:18.777 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T13:13:18.777 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T13:13:18.777 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:18.777 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: mgrmap e13: a(active, since 6s)
2026-03-10T13:13:18.777 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:18.777 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:18 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:18.777 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:18.777 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":1,"fsid":"bd98ed20-1c82-11f1-9239-ff903ae4ee6f","modified":"2026-03-10T13:12:48.686420Z","created":"2026-03-10T13:12:48.686420Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T13:13:18.777 INFO:teuthology.orchestra.run.vm07.stderr:dumped monmap epoch 1
2026-03-10T13:13:18.926 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-10T13:13:18.926 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph config generate-minimal-conf
2026-03-10T13:13:19.078 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:19.289 INFO:teuthology.orchestra.run.vm07.stdout:# minimal ceph.conf for bd98ed20-1c82-11f1-9239-ff903ae4ee6f
2026-03-10T13:13:19.289 INFO:teuthology.orchestra.run.vm07.stdout:[global]
2026-03-10T13:13:19.289 INFO:teuthology.orchestra.run.vm07.stdout: fsid = bd98ed20-1c82-11f1-9239-ff903ae4ee6f
2026-03-10T13:13:19.289 INFO:teuthology.orchestra.run.vm07.stdout: mon_host = [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0]
2026-03-10T13:13:19.436 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring...
2026-03-10T13:13:19.436 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T13:13:19.436 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T13:13:19.459 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T13:13:19.459 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T13:13:19.524 INFO:tasks.cephadm:Adding mgr.a on vm07
2026-03-10T13:13:19.524 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph orch apply mgr '1;vm07=a'
2026-03-10T13:13:19.717 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:19.736 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:19 vm07 ceph-mon[52048]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "1;vm07:192.168.123.107=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:13:19.736 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:19 vm07 ceph-mon[52048]: Saving service mon spec with placement vm07:192.168.123.107=a;count:1
2026-03-10T13:13:19.736 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:19 vm07 ceph-mon[52048]: Reconfiguring mon.a (unknown last config time)...
2026-03-10T13:13:19.736 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:19 vm07 ceph-mon[52048]: Reconfiguring daemon mon.a on vm07
2026-03-10T13:13:19.737 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:19 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2782648735' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T13:13:19.737 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:19 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2874301219' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:19.944 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled mgr update...
2026-03-10T13:13:20.103 INFO:tasks.cephadm:Deploying OSDs...
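
The "Waiting for 1 mons in monmap..." step above is a poll: the task repeatedly runs `ceph mon dump -f json` and counts the entries under "mons" until the expected number appears (here the dump already listed one mon, so a single pass sufficed). A minimal sketch of that check, assuming a local admin keyring (wait_for_mons is illustrative, not the task's actual helper):

    import json
    import subprocess
    import time

    def wait_for_mons(expected: int, timeout: float = 300.0) -> None:
        """Poll `ceph mon dump -f json` until the monmap lists `expected` mons."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.run(
                ["ceph", "mon", "dump", "-f", "json"],
                check=True, capture_output=True, text=True).stdout
            if len(json.loads(out)["mons"]) >= expected:
                return
            time.sleep(5)
        raise TimeoutError(f"monmap never reached {expected} mons")

    wait_for_mons(1)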
2026-03-10T13:13:20.103 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T13:13:20.103 DEBUG:teuthology.orchestra.run.vm07:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T13:13:20.132 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:13:20.132 DEBUG:teuthology.orchestra.run.vm07:> ls /dev/[sv]d?
2026-03-10T13:13:20.203 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vda
2026-03-10T13:13:20.204 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdb
2026-03-10T13:13:20.204 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdc
2026-03-10T13:13:20.204 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdd
2026-03-10T13:13:20.204 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vde
2026-03-10T13:13:20.204 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T13:13:20.204 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T13:13:20.204 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdb
2026-03-10T13:13:20.262 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdb
2026-03-10T13:13:20.262 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T13:13:20.262 INFO:teuthology.orchestra.run.vm07.stdout:Device: 6h/6d Inode: 250 Links: 1 Device type: fc,10
2026-03-10T13:13:20.262 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T13:13:20.262 INFO:teuthology.orchestra.run.vm07.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T13:13:20.262 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 13:13:16.905775715 +0000
2026-03-10T13:13:20.262 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 13:11:00.335124613 +0000
2026-03-10T13:13:20.262 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 13:11:00.335124613 +0000
2026-03-10T13:13:20.262 INFO:teuthology.orchestra.run.vm07.stdout: Birth: 2026-03-10 13:08:31.263000000 +0000
2026-03-10T13:13:20.262 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T13:13:20.330 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-10T13:13:20.330 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-10T13:13:20.330 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000103714 s, 4.9 MB/s
2026-03-10T13:13:20.331 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T13:13:20.388 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdc
2026-03-10T13:13:20.445 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdc
2026-03-10T13:13:20.445 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T13:13:20.446 INFO:teuthology.orchestra.run.vm07.stdout:Device: 6h/6d Inode: 251 Links: 1 Device type: fc,20
2026-03-10T13:13:20.446 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T13:13:20.446 INFO:teuthology.orchestra.run.vm07.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T13:13:20.446 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 13:13:16.909775719 +0000
2026-03-10T13:13:20.446 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 13:11:00.338124616 +0000
2026-03-10T13:13:20.446 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 13:11:00.338124616 +0000
2026-03-10T13:13:20.446 INFO:teuthology.orchestra.run.vm07.stdout: Birth: 2026-03-10 13:08:31.267000000 +0000
2026-03-10T13:13:20.446 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T13:13:20.509 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-10T13:13:20.509 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-10T13:13:20.509 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000182282 s, 2.8 MB/s
2026-03-10T13:13:20.510 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T13:13:20.567 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdd
2026-03-10T13:13:20.623 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdd
2026-03-10T13:13:20.623 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T13:13:20.623 INFO:teuthology.orchestra.run.vm07.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-10T13:13:20.623 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T13:13:20.623 INFO:teuthology.orchestra.run.vm07.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T13:13:20.623 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 13:13:16.913775722 +0000
2026-03-10T13:13:20.623 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 13:11:00.336124614 +0000
2026-03-10T13:13:20.623 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 13:11:00.336124614 +0000
2026-03-10T13:13:20.623 INFO:teuthology.orchestra.run.vm07.stdout: Birth: 2026-03-10 13:08:31.279000000 +0000
2026-03-10T13:13:20.624 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T13:13:20.685 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-10T13:13:20.686 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-10T13:13:20.686 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000209504 s, 2.4 MB/s
2026-03-10T13:13:20.686 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T13:13:20.742 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vde
2026-03-10T13:13:20.798 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vde
2026-03-10T13:13:20.798 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T13:13:20.798 INFO:teuthology.orchestra.run.vm07.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-10T13:13:20.798 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T13:13:20.798 INFO:teuthology.orchestra.run.vm07.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T13:13:20.798 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 13:13:16.918775727 +0000
2026-03-10T13:13:20.798 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 13:11:00.361124638 +0000
2026-03-10T13:13:20.798 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 13:11:00.361124638 +0000
2026-03-10T13:13:20.798 INFO:teuthology.orchestra.run.vm07.stdout: Birth: 2026-03-10 13:08:31.281000000 +0000
2026-03-10T13:13:20.798 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T13:13:20.860 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-10T13:13:20.860 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-10T13:13:20.860 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000172562 s, 3.0 MB/s
2026-03-10T13:13:20.861 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T13:13:20.917 INFO:tasks.cephadm:Deploying osd.0 on vm07 with /dev/vde...
2026-03-10T13:13:20.917 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- lvm zap /dev/vde
2026-03-10T13:13:20.943 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:20 vm07 ceph-mon[52048]: from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:13:20.943 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:20 vm07 ceph-mon[52048]: Saving service mgr spec with placement vm07=a;count:1
2026-03-10T13:13:20.943 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:20 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:20.943 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:20 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:20.943 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:20 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:20.943 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:20 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:13:20.943 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:20 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:20.943 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:20 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:20.943 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:20 vm07 ceph-mon[52048]: Reconfiguring mgr.a (unknown last config time)...
2026-03-10T13:13:20.943 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:20 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T13:13:20.943 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:20 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T13:13:20.943 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:20 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:20.943 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:20 vm07 ceph-mon[52048]: Reconfiguring daemon mgr.a on vm07
2026-03-10T13:13:20.943 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:20 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:20.943 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:20 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:21.094 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:22.074 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:22.091 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph orch daemon add osd vm07:/dev/vde
2026-03-10T13:13:22.251 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:22.534 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:22 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T13:13:22.837 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:22 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T13:13:22.838 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:22 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:23.736 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:23 vm07 ceph-mon[52048]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:13:23.736 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:23 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2939887292' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "62c85112-9da4-4845-b7c8-809946f80c39"}]: dispatch
2026-03-10T13:13:23.736 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:23 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2939887292' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "62c85112-9da4-4845-b7c8-809946f80c39"}]': finished
2026-03-10T13:13:23.736 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:23 vm07 ceph-mon[52048]: osdmap e5: 1 total, 0 up, 1 in
2026-03-10T13:13:23.736 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:23 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:13:24.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:24 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/941053956' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T13:13:28.038 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:27 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T13:13:28.038 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:27 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:29.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:28 vm07 ceph-mon[52048]: Deploying daemon osd.0 on vm07
2026-03-10T13:13:30.325 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:30 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:30.325 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:30 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:30.325 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:30 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:30.325 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:30 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:13:30.325 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:30 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:30.325 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:30 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:31.260 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 0 on host 'vm07'
2026-03-10T13:13:31.385 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:31 vm07 ceph-mon[52048]: from='osd.0 [v2:192.168.123.107:6802/546576916,v1:192.168.123.107:6803/546576916]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T13:13:31.385 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:31 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:31.385 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:31 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:31.385 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:31 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:13:31.385 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:31 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:31.385 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:31 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:13:31.385 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:31 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:31.386 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:31 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:31.409 DEBUG:teuthology.orchestra.run.vm07:osd.0> sudo journalctl -f -n 0 -u ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.0.service
2026-03-10T13:13:31.411 INFO:tasks.cephadm:Deploying osd.1 on vm07 with /dev/vdd...
2026-03-10T13:13:31.411 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- lvm zap /dev/vdd
2026-03-10T13:13:31.606 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:32.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:32 vm07 ceph-mon[52048]: from='osd.0 [v2:192.168.123.107:6802/546576916,v1:192.168.123.107:6803/546576916]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T13:13:32.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:32 vm07 ceph-mon[52048]: osdmap e6: 1 total, 0 up, 1 in
2026-03-10T13:13:32.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:32 vm07 ceph-mon[52048]: from='osd.0 [v2:192.168.123.107:6802/546576916,v1:192.168.123.107:6803/546576916]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T13:13:32.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:32 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:13:32.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:32 vm07 ceph-mon[52048]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:13:32.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:32 vm07 ceph-mon[52048]: from='osd.0 [v2:192.168.123.107:6802/546576916,v1:192.168.123.107:6803/546576916]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T13:13:32.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:32 vm07 ceph-mon[52048]: osdmap e7: 1 total, 0 up, 1 in
2026-03-10T13:13:32.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:32 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:13:32.640 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:32.657 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph orch daemon add osd vm07:/dev/vdd
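
The probe shown above is the fallback path for finding scratch disks: /scratch_devs did not exist (remote result 1), so the task lists /dev/[sv]d?, drops the root disk, and keeps only devices that stat cleanly, read successfully, and are not mounted. A rough local equivalent of those checks (the function name and root-disk default are assumptions, not teuthology code):

    import glob
    import subprocess

    def scratch_devices(root_dev: str = "/dev/vda") -> list[str]:
        """List candidate OSD disks the way the probe above does."""
        devs = []
        for dev in sorted(glob.glob("/dev/[sv]d?")):
            if dev == root_dev:
                continue  # never hand the boot disk to an OSD
            # readable? (sudo dd if=$dev of=/dev/null count=1)
            probe = subprocess.run(
                ["sudo", "dd", f"if={dev}", "of=/dev/null", "count=1"],
                capture_output=True)
            if probe.returncode != 0:
                continue
            # unmounted? (! mount | grep -v devtmpfs | grep -q $dev)
            mounts = subprocess.run(["mount"], check=True, capture_output=True,
                                    text=True).stdout
            if any(dev in line for line in mounts.splitlines()
                   if "devtmpfs" not in line):
                continue
            devs.append(dev)
        return devs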
2026-03-10T13:13:32.838 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:33.300 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:33 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:13:33.300 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:33 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T13:13:33.300 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:33 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T13:13:33.300 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:33 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:33.569 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:13:33 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0[60586]: 2026-03-10T13:13:33.309+0000 7fa2231e6640 -1 osd.0 0 waiting for initial osdmap
2026-03-10T13:13:33.569 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:13:33 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0[60586]: 2026-03-10T13:13:33.316+0000 7fa21e80f640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T13:13:34.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:34 vm07 ceph-mon[52048]: purged_snaps scrub starts
2026-03-10T13:13:34.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:34 vm07 ceph-mon[52048]: purged_snaps scrub ok
2026-03-10T13:13:34.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:34 vm07 ceph-mon[52048]: from='client.14193 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:13:34.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:34 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:13:34.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:34 vm07 ceph-mon[52048]: from='osd.0 [v2:192.168.123.107:6802/546576916,v1:192.168.123.107:6803/546576916]' entity='osd.0'
2026-03-10T13:13:34.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:34 vm07 ceph-mon[52048]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:13:34.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:34 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/613699152' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "313e9150-62ce-49f1-94b9-336bc0739e4e"}]: dispatch
2026-03-10T13:13:34.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:34 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/613699152' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "313e9150-62ce-49f1-94b9-336bc0739e4e"}]': finished
2026-03-10T13:13:34.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:34 vm07 ceph-mon[52048]: osd.0 [v2:192.168.123.107:6802/546576916,v1:192.168.123.107:6803/546576916] boot
2026-03-10T13:13:34.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:34 vm07 ceph-mon[52048]: osdmap e8: 2 total, 1 up, 2 in
2026-03-10T13:13:34.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:34 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:13:34.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:34 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:13:35.621 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:35 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1042322686' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T13:13:35.621 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:35 vm07 ceph-mon[52048]: osdmap e9: 2 total, 1 up, 2 in
2026-03-10T13:13:35.621 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:35 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:13:36.385 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:36 vm07 ceph-mon[52048]: pgmap v11: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:13:38.033 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:37 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:38.033 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:37 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:38.033 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:37 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:38.033 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:37 vm07 ceph-mon[52048]: pgmap v12: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:13:38.871 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:38 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T13:13:38.871 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:38 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:39.959 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:39 vm07 ceph-mon[52048]: Deploying daemon osd.1 on vm07
2026-03-10T13:13:39.959 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:39 vm07 ceph-mon[52048]: pgmap v13: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:13:41.249 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:41 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:41.249 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:41 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:41.249 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:41 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:41.249 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:41 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:13:41.249 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:41 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:41.249 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:41 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:42.307 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 1 on host 'vm07'
2026-03-10T13:13:42.340 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:42 vm07 ceph-mon[52048]: pgmap v14: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:13:42.468 DEBUG:teuthology.orchestra.run.vm07:osd.1> sudo journalctl -f -n 0 -u ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.1.service
2026-03-10T13:13:42.469 INFO:tasks.cephadm:Deploying osd.2 on vm07 with /dev/vdc...
2026-03-10T13:13:42.470 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- lvm zap /dev/vdc
2026-03-10T13:13:42.694 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:42.973 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:13:42 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1[63991]: 2026-03-10T13:13:42.960+0000 7f469ef69740 -1 osd.1 0 log_to_monitors true
2026-03-10T13:13:43.227 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:43 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:43.227 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:43 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:43.227 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:43 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:13:43.227 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:43 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:43.227 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:43 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:13:43.227 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:43 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:43.227 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:43 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:43.227 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:43 vm07 ceph-mon[52048]: from='osd.1 [v2:192.168.123.107:6810/3813827411,v1:192.168.123.107:6811/3813827411]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T13:13:43.725 INFO:teuthology.orchestra.run.vm07.stdout:
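
Each OSD deployment above is the same two-command cycle: zap the device with cephadm ceph-volume, then hand it to the orchestrator with `ceph orch daemon add osd`. Condensed into a sketch (image, fsid, and device order copied from this run; deploy_osd is an illustrative wrapper, not the task's code):

    import subprocess

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "bd98ed20-1c82-11f1-9239-ff903ae4ee6f"
    CEPHADM = ["sudo", "/home/ubuntu/cephtest/cephadm", "--image", IMAGE]
    KEYS = ["-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring"]

    def deploy_osd(host: str, dev: str) -> None:
        """Zap a device, then let the orchestrator create an OSD on it."""
        subprocess.run(CEPHADM + ["ceph-volume"] + KEYS + ["--fsid", FSID,
                       "--", "lvm", "zap", dev], check=True)
        subprocess.run(CEPHADM + ["shell"] + KEYS + ["--fsid", FSID,
                       "--", "ceph", "orch", "daemon", "add", "osd",
                       f"{host}:{dev}"], check=True)

    for dev in ("/dev/vde", "/dev/vdd", "/dev/vdc"):  # order used in this run
        deploy_osd("vm07", dev)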
2026-03-10T13:13:43.744 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph orch daemon add osd vm07:/dev/vdc
2026-03-10T13:13:43.916 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:44.320 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:44 vm07 ceph-mon[52048]: from='osd.1 [v2:192.168.123.107:6810/3813827411,v1:192.168.123.107:6811/3813827411]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T13:13:44.320 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:44 vm07 ceph-mon[52048]: osdmap e10: 2 total, 1 up, 2 in
2026-03-10T13:13:44.320 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:44 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:13:44.320 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:44 vm07 ceph-mon[52048]: from='osd.1 [v2:192.168.123.107:6810/3813827411,v1:192.168.123.107:6811/3813827411]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T13:13:44.320 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:44 vm07 ceph-mon[52048]: pgmap v16: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:13:44.320 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:44 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T13:13:44.320 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:44 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T13:13:44.320 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:44 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:44.568 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:13:44 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1[63991]: 2026-03-10T13:13:44.321+0000 7f469b6fd640 -1 osd.1 0 waiting for initial osdmap
2026-03-10T13:13:44.568 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:13:44 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1[63991]: 2026-03-10T13:13:44.325+0000 7f4696d14640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T13:13:45.581 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:45 vm07 ceph-mon[52048]: from='client.14202 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:13:45.581 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:45 vm07 ceph-mon[52048]: from='osd.1 [v2:192.168.123.107:6810/3813827411,v1:192.168.123.107:6811/3813827411]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T13:13:45.581 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:45 vm07 ceph-mon[52048]: osdmap e11: 2 total, 1 up, 2 in
2026-03-10T13:13:45.581 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:45 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:13:45.581 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:45 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:13:45.581 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:45 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1959501701' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d07c8bd9-87b1-4074-add0-71507e9620df"}]: dispatch
2026-03-10T13:13:45.581 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:45 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1959501701' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d07c8bd9-87b1-4074-add0-71507e9620df"}]': finished
2026-03-10T13:13:45.581 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:45 vm07 ceph-mon[52048]: osd.1 [v2:192.168.123.107:6810/3813827411,v1:192.168.123.107:6811/3813827411] boot
2026-03-10T13:13:45.581 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:45 vm07 ceph-mon[52048]: osdmap e12: 3 total, 2 up, 3 in
2026-03-10T13:13:45.581 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:45 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:13:45.581 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:45 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T13:13:46.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:46 vm07 ceph-mon[52048]: purged_snaps scrub starts
2026-03-10T13:13:46.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:46 vm07 ceph-mon[52048]: purged_snaps scrub ok
2026-03-10T13:13:46.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:46 vm07 ceph-mon[52048]: pgmap v19: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T13:13:46.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:46 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2711030295' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T13:13:47.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:47 vm07 ceph-mon[52048]: osdmap e13: 3 total, 2 up, 3 in
2026-03-10T13:13:47.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:47 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T13:13:48.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:48 vm07 ceph-mon[52048]: pgmap v21: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T13:13:49.494 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:49 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T13:13:49.494 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:49 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:50.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:50 vm07 ceph-mon[52048]: Deploying daemon osd.2 on vm07
2026-03-10T13:13:50.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:50 vm07 ceph-mon[52048]: pgmap v22: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T13:13:51.806 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:51 vm07 ceph-mon[52048]: pgmap v23: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T13:13:51.806 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:51 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:51.807 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:51 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:51.807 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:51 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:51.807 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:51 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:13:51.807 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:51 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:51.807 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:51 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:53.009 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 2 on host 'vm07'
2026-03-10T13:13:53.166 DEBUG:teuthology.orchestra.run.vm07:osd.2> sudo journalctl -f -n 0 -u ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.2.service
2026-03-10T13:13:53.167 INFO:tasks.cephadm:Waiting for 3 OSDs to come up...
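
"Waiting for 3 OSDs to come up..." is another poll: `ceph osd stat -f json` is run repeatedly and num_up_osds compared against the expected count (the first samples below still report 2 of 3 up). A minimal sketch of that loop, assuming a local admin keyring (wait_for_osds is illustrative, not the task's actual helper):

    import json
    import subprocess
    import time

    def wait_for_osds(want: int, timeout: float = 900.0) -> None:
        """Poll `ceph osd stat -f json` until `want` OSDs report up."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.run(
                ["ceph", "osd", "stat", "-f", "json"],
                check=True, capture_output=True, text=True).stdout
            if json.loads(out)["num_up_osds"] >= want:
                return
            time.sleep(1)
        raise TimeoutError(f"fewer than {want} OSDs came up in {timeout}s")

    wait_for_osds(3)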
2026-03-10T13:13:53.167 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph osd stat -f json
2026-03-10T13:13:53.238 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:53 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:53.238 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:53 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:13:53.238 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:53 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:53.238 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:53 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:53.238 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:53 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:13:53.238 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:53 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:53.238 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:53 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:53.238 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:13:53 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2[67090]: 2026-03-10T13:13:53.237+0000 7f6172fba740 -1 osd.2 0 log_to_monitors true
2026-03-10T13:13:53.390 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:53.630 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:53.780 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":13,"num_osds":3,"num_up_osds":2,"osd_up_since":1773148425,"num_in_osds":3,"osd_in_since":1773148425,"num_remapped_pgs":0}
2026-03-10T13:13:54.042 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:54 vm07 ceph-mon[52048]: from='osd.2 [v2:192.168.123.107:6818/1431031109,v1:192.168.123.107:6819/1431031109]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T13:13:54.042 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:54 vm07 ceph-mon[52048]: pgmap v24: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T13:13:54.042 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:54 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/411146906' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T13:13:54.781 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph osd stat -f json
2026-03-10T13:13:54.952 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:55.194 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:55.327 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:55 vm07 ceph-mon[52048]: from='osd.2 [v2:192.168.123.107:6818/1431031109,v1:192.168.123.107:6819/1431031109]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T13:13:55.327 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:55 vm07 ceph-mon[52048]: osdmap e14: 3 total, 2 up, 3 in
2026-03-10T13:13:55.327 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:55 vm07 ceph-mon[52048]: from='osd.2 [v2:192.168.123.107:6818/1431031109,v1:192.168.123.107:6819/1431031109]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T13:13:55.327 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:55 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T13:13:55.358 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":15,"num_osds":3,"num_up_osds":2,"osd_up_since":1773148425,"num_in_osds":3,"osd_in_since":1773148425,"num_remapped_pgs":0}
2026-03-10T13:13:56.340 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:56 vm07 ceph-mon[52048]: from='osd.2 [v2:192.168.123.107:6818/1431031109,v1:192.168.123.107:6819/1431031109]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T13:13:56.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:56 vm07 ceph-mon[52048]: osdmap e15: 3 total, 2 up, 3 in
2026-03-10T13:13:56.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:56 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T13:13:56.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:56 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T13:13:56.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:56 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/670201900' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T13:13:56.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:56 vm07 ceph-mon[52048]: pgmap v27: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T13:13:56.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:56 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T13:13:56.341 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:13:56 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2[67090]: 2026-03-10T13:13:56.080+0000 7f616f74e640 -1 osd.2 0 waiting for initial osdmap
2026-03-10T13:13:56.341 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:13:56 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2[67090]: 2026-03-10T13:13:56.087+0000 7f616a564640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T13:13:56.359 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph osd stat -f json
2026-03-10T13:13:56.515 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:56.735 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:56.905 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":15,"num_osds":3,"num_up_osds":2,"osd_up_since":1773148425,"num_in_osds":3,"osd_in_since":1773148425,"num_remapped_pgs":0}
2026-03-10T13:13:57.280 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:57 vm07 ceph-mon[52048]: purged_snaps scrub starts
2026-03-10T13:13:57.280 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:57 vm07 ceph-mon[52048]: purged_snaps scrub ok
2026-03-10T13:13:57.280 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:57 vm07 ceph-mon[52048]: from='osd.2 [v2:192.168.123.107:6818/1431031109,v1:192.168.123.107:6819/1431031109]' entity='osd.2'
2026-03-10T13:13:57.280 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:57 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1054439295' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T13:13:57.280 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:57 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:57.280 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:57 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:57.280 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:57 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:13:57.280 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:57 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:57.280 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:57 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:13:57.280 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:57 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:13:57.280 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:57 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:57.280 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:57 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T13:13:57.906 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph osd stat -f json
2026-03-10T13:13:58.085 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:13:58.131 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:58 vm07 ceph-mon[52048]: Detected new or changed devices on vm07
2026-03-10T13:13:58.320 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:13:58.464 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:58 vm07 ceph-mon[52048]: osd.2 [v2:192.168.123.107:6818/1431031109,v1:192.168.123.107:6819/1431031109] boot
2026-03-10T13:13:58.464 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:58 vm07 ceph-mon[52048]: osdmap e16: 3 total, 3 up, 3 in
2026-03-10T13:13:58.464 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:58 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T13:13:58.464 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:58 vm07 ceph-mon[52048]: pgmap v29: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T13:13:58.464 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:58 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:13:58.464 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:58 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:13:58.464 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 
13:13:58 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' 2026-03-10T13:13:58.464 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:58 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:13:58.464 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:58 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:13:58.464 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:58 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:13:58.464 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:58 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' 2026-03-10T13:13:58.464 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:58 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' 2026-03-10T13:13:58.491 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":17,"num_osds":3,"num_up_osds":3,"osd_up_since":1773148437,"num_in_osds":3,"osd_in_since":1773148425,"num_remapped_pgs":0} 2026-03-10T13:13:58.491 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph osd dump --format=json 2026-03-10T13:13:58.657 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:13:58.875 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:13:58.875 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":17,"fsid":"bd98ed20-1c82-11f1-9239-ff903ae4ee6f","created":"2026-03-10T13:12:49.860773+0000","modified":"2026-03-10T13:13:58.124209+0000","last_up_change":"2026-03-10T13:13:57.083601+0000","last_in_change":"2026-03-10T13:13:45.092552+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T13:13:57.550111+0000","flags":32769,"flags_names":"hashpspool,creating","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_object
s":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{},"read_balance":{"score_type":"Fair distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"62c85112-9da4-4845-b7c8-809946f80c39","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":546576916},{"type":"v1","addr":"192.168.123.107:6803","nonce":546576916}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":546576916},{"type":"v1","addr":"192.168.123.107:6805","nonce":546576916}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":546576916},{"type":"v1","addr":"192.168.123.107:6809","nonce":546576916}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":546576916},{"type":"v1","addr":"192.168.123.107:6807","nonce":546576916}]},"public_addr":"192.168.123.107:6803/546576916","cluster_addr":"192.168.123.107:6805/546576916","heartbeat_back_addr":"192.168.123.107:6809/546576916","heartbeat_front_addr":"192.168.123.107:6807/546576916","state":["exists","up"]},{"osd":1,"uuid":"313e9150-62ce-49f1-94b9-336bc0739e4e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":3813827411},{"type":"v1","addr":"192.168.123.107:6811","nonce":3813827411}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":3813827411},{"type":"v1","addr":"192.168.123.107:6813","nonce":3813827411}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":3813827411},{"type":"v1","addr":"192.168.123.107:6817","nonce":3813827411}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":3813827411},{"type":"v1","addr":"192.168.123.107:6815","nonce":3813827411}]},"public_addr":"192.168.123.107:6811/3813827411","cluster_addr":"192.168.123.107:6813/3813827411","heartbeat_back_addr":"192.168.123.107:6817/3813827411","heartbeat_front_addr":"192.168.123.107:6815/3813827411","state":["exists","up"]},{"osd":2,"uuid":"d07c8bd9-87b1-4074-add0-71507e9620df","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6818","nonce":1431031109},{"type":"v1","addr":"192.168.123.107:6819","nonce":1431031109}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6820","nonce":1431031109},{"type":"v1","addr":"192.168.123.107:6821","nonce":1431031109}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6824","nonce":1431031109},{"type":"v1","addr":"192.168.
123.107:6825","nonce":1431031109}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6822","nonce":1431031109},{"type":"v1","addr":"192.168.123.107:6823","nonce":1431031109}]},"public_addr":"192.168.123.107:6819/1431031109","cluster_addr":"192.168.123.107:6821/1431031109","heartbeat_back_addr":"192.168.123.107:6825/1431031109","heartbeat_front_addr":"192.168.123.107:6823/1431031109","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:13:32.234387+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:13:43.971888+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:13:54.234160+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.107:0/2631950073":"2026-03-11T13:13:11.423073+0000","192.168.123.107:6801/2892363095":"2026-03-11T13:13:11.423073+0000","192.168.123.107:6800/2892363095":"2026-03-11T13:13:11.423073+0000","192.168.123.107:0/538258474":"2026-03-11T13:13:11.423073+0000","192.168.123.107:0/2362313244":"2026-03-11T13:13:11.423073+0000","192.168.123.107:0/241309621":"2026-03-11T13:13:01.205715+0000","192.168.123.107:0/2936972974":"2026-03-11T13:13:01.205715+0000","192.168.123.107:6800/301384821":"2026-03-11T13:13:01.205715+0000","192.168.123.107:6801/301384821":"2026-03-11T13:13:01.205715+0000","192.168.123.107:0/1844477233":"2026-03-11T13:13:01.205715+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T13:13:59.027 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-10T13:13:57.550111+0000', 'flags': 32769, 'flags_names': 'hashpspool,creating', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '17', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 
'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 3, 'score_stable': 3, 'optimal_score': 1, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-10T13:13:59.028 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph osd pool get .mgr pg_num 2026-03-10T13:13:59.211 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:13:59.257 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:59 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T13:13:59.257 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:59 vm07 ceph-mon[52048]: osdmap e17: 3 total, 3 up, 3 in 2026-03-10T13:13:59.257 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:59 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:13:59.257 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:59 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2114586534' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:13:59.257 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:59 vm07 ceph-mon[52048]: from='client.? 
192.168.123.107:0/3780755085' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-10T13:13:59.257 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69018]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde
2026-03-10T13:13:59.258 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69018]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
2026-03-10T13:13:59.258 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69018]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
2026-03-10T13:13:59.258 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69018]: pam_unix(sudo:session): session closed for user root
2026-03-10T13:13:59.457 INFO:teuthology.orchestra.run.vm07.stdout:pg_num: 1
2026-03-10T13:13:59.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69067]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
2026-03-10T13:13:59.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69067]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
2026-03-10T13:13:59.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69067]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
2026-03-10T13:13:59.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69067]: pam_unix(sudo:session): session closed for user root
2026-03-10T13:13:59.591 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69033]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdd
2026-03-10T13:13:59.591 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69033]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
2026-03-10T13:13:59.591 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69033]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
2026-03-10T13:13:59.591 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69033]: pam_unix(sudo:session): session closed for user root
2026-03-10T13:13:59.591 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69055]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdc
2026-03-10T13:13:59.591 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69055]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
2026-03-10T13:13:59.591 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69055]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
2026-03-10T13:13:59.591 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:13:59 vm07 sudo[69055]: pam_unix(sudo:session): session closed for user root
2026-03-10T13:13:59.626 INFO:tasks.cephadm:Setting up client nodes...
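
The client setup recorded below boils down to two steps: mint a client.0 key with full caps via ceph auth get-or-create, then install it on the node as a world-readable keyring (the key is piped into sudo dd and chmod'ed 0644). A rough Python equivalent, reusing the CEPHADM, IMAGE, and FSID constants from the sketch above; setup_client is an illustrative name, not the actual task code:

def setup_client(client_id="0"):
    """Create client.<id> with full caps and install its keyring (illustrative)."""
    keyring = subprocess.check_output([
        "sudo", CEPHADM, "--image", IMAGE, "shell",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "--fsid", FSID, "--",
        "ceph", "auth", "get-or-create", "client." + client_id,
        "mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *",
    ])
    path = "/etc/ceph/ceph.client.%s.keyring" % client_id
    # The log writes the keyring via `sudo dd of=<path>` with the key on stdin,
    # then makes it world-readable.
    subprocess.run(["sudo", "dd", "of=" + path], input=keyring, check=True)
    subprocess.run(["sudo", "chmod", "0644", path], check=True)
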
2026-03-10T13:13:59.627 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-10T13:13:59.791 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:14:00.057 INFO:teuthology.orchestra.run.vm07.stdout:[client.0]
2026-03-10T13:14:00.057 INFO:teuthology.orchestra.run.vm07.stdout: key = AQAYGbBp6Y1IAxAA+zfsd61n3ryLhfu9PgzepQ==
2026-03-10T13:14:00.210 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T13:14:00.211 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.client.0.keyring
2026-03-10T13:14:00.211 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring
2026-03-10T13:14:00.244 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean...
2026-03-10T13:14:00.244 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available
2026-03-10T13:14:00.244 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph mgr dump --format=json
2026-03-10T13:14:00.446 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:14:00.470 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:00 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-10T13:14:00.470 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:00 vm07 ceph-mon[52048]: osdmap e18: 3 total, 3 up, 3 in
2026-03-10T13:14:00.470 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:00 vm07 ceph-mon[52048]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T13:14:00.470 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:00 vm07 ceph-mon[52048]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T13:14:00.470 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:00 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:14:00.470 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:00 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2936813943' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-10T13:14:00.470 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:00 vm07 ceph-mon[52048]: pgmap v32: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:14:00.470 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:00 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/611779413' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T13:14:00.470 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:00 vm07 ceph-mon[52048]: from='client.? 
192.168.123.107:0/611779413' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T13:14:00.692 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:14:00.862 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":14,"flags":0,"active_gid":14150,"active_name":"a","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":3450077655},{"type":"v1","addr":"192.168.123.107:6801","nonce":3450077655}]},"active_addr":"192.168.123.107:6801/3450077655","active_change":"2026-03-10T13:13:11.423177+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to 
authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.107:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.107:0","nonce":2859751287}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.107:0","nonce":2925088531}]},{"name":"rbd_support","addrvec":[{"type":"v2","a
ddr":"192.168.123.107:0","nonce":1738500406}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.107:0","nonce":1442974072}]}]} 2026-03-10T13:14:00.862 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T13:14:00.863 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T13:14:00.863 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph osd dump --format=json 2026-03-10T13:14:01.039 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:14:01.271 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:14:01.271 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":19,"fsid":"bd98ed20-1c82-11f1-9239-ff903ae4ee6f","created":"2026-03-10T13:12:49.860773+0000","modified":"2026-03-10T13:14:00.138958+0000","last_up_change":"2026-03-10T13:13:57.083601+0000","last_in_change":"2026-03-10T13:13:45.092552+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T13:13:57.550111+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"19","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"62c85112-9da4-4845-b7c8-809946f80c39","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":546576916},{"type":"v1","addr":"192.168.123.107:6803","nonce":546576916}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":546576916},{"type":"v1","addr":"192.168.123.107:6805","nonce":546576916}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":546576916},{"type":"v1","addr":"192.168.123.107:6809","nonce":546576916}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":546576916},{"type":"v1","addr":"192.168.123.107:6807","nonce":546576916}]},"public_addr":"192.168.123.107:6803/546576916","cluster_addr":"192.168.123.107:6805/546576916","heartbeat_back_addr":"192.168.123.107:6809/546576916","heartbeat_front_addr":"192.168.123.107:6807/546576916","state":["exists","up"]},{"osd":1,"uuid":"313e9150-62ce-49f1-94b9-336bc0739e4e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":17,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":3813827411},{"type":"v1","addr":"192.168.123.107:6811","nonce":3813827411}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":3813827411},{"type":"v1","addr":"192.168.123.107:6813","nonce":3813827411}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":3813827411},{"type":"v1","addr":"192.168.123.107:6817","nonce":3813827411}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":3813827411},{"type":"v1","addr":"192.168.123.107:6815","nonce":3813827411}]},"public_addr":"192.168.123.107:6811/3813827411","cluster_addr":"192.168.123.107:6813/3813827411","heartbeat_back_addr":"192.168.123.107:6817/3813827411","heartbeat_front_addr":"192.168.123.107:6815/3813827411","state":["exists","up"]},{"osd":2,"uuid":"d07c8bd9-87b1-4074-add0-71507e9620df","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6818","nonce":1431031109},{"type":"v1","addr":"192.168.123.107:6819","nonce":1431031109}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6820","nonce":1431031109},{"type":"v1","addr":"192.168.123.107:6821","nonce":1431031109}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6824","nonce":1431031109},{"type":"v1","addr":"192.168.123.107:6825","nonce":1431031109}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6822","nonce":1431031109},{"type":"v1","addr":"192.168.123.107:6823","nonce":1431031109}]},"public_addr":"192.168.123.107:6819/1431031109","cluster_addr":"192.168.123.107:6821/1431031109","heartbeat_back_addr":"192.168.123.107:6825/1431031109","heartbeat_front_addr":"192.168.123.107:6823/1431031109","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2
026-03-10T13:13:32.234387+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:13:43.971888+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:13:54.234160+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.107:0/2631950073":"2026-03-11T13:13:11.423073+0000","192.168.123.107:6801/2892363095":"2026-03-11T13:13:11.423073+0000","192.168.123.107:6800/2892363095":"2026-03-11T13:13:11.423073+0000","192.168.123.107:0/538258474":"2026-03-11T13:13:11.423073+0000","192.168.123.107:0/2362313244":"2026-03-11T13:13:11.423073+0000","192.168.123.107:0/241309621":"2026-03-11T13:13:01.205715+0000","192.168.123.107:0/2936972974":"2026-03-11T13:13:01.205715+0000","192.168.123.107:6800/301384821":"2026-03-11T13:13:01.205715+0000","192.168.123.107:6801/301384821":"2026-03-11T13:13:01.205715+0000","192.168.123.107:0/1844477233":"2026-03-11T13:13:01.205715+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T13:14:01.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:01 vm07 ceph-mon[52048]: mgrmap e14: a(active, since 48s) 2026-03-10T13:14:01.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:01 vm07 ceph-mon[52048]: osdmap e19: 3 total, 3 up, 3 in 2026-03-10T13:14:01.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:01 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2517309054' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T13:14:01.436 INFO:tasks.cephadm.ceph_manager.ceph:all up! 
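[editor's note] The "waiting for all up" ... "all up!" bracket above is driven by repeatedly running `ceph osd dump --format=json` through `cephadm shell` and checking the `osds` array. A minimal sketch of that poll, assuming only the command lines and JSON shape visible in this log; the helper names, constants, and polling interval are illustrative, not teuthology's actual ceph_manager API:

    import json
    import subprocess
    import time

    # Values taken verbatim from the DEBUG command lines above.
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "bd98ed20-1c82-11f1-9239-ff903ae4ee6f"
    CEPHADM = "/home/ubuntu/cephtest/cephadm"

    def cephadm_json(*args):
        # Run a ceph command inside `cephadm shell`, as the DEBUG lines above
        # do, and parse the JSON it prints on stdout.
        cmd = ["sudo", CEPHADM, "--image", IMAGE, "shell", "--fsid", FSID, "--", *args]
        return json.loads(subprocess.check_output(cmd))

    def wait_for_all_up(interval=3):
        # Re-dump the osdmap until every OSD reports both up and in.
        while True:
            osds = cephadm_json("ceph", "osd", "dump", "--format=json")["osds"]
            if all(o["up"] == 1 and o["in"] == 1 for o in osds):
                return
            time.sleep(interval)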
2026-03-10T13:14:01.436 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph osd dump --format=json 2026-03-10T13:14:01.607 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:14:02.002 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph tell osd.0 flush_pg_stats 2026-03-10T13:14:02.002 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph tell osd.1 flush_pg_stats 2026-03-10T13:14:02.002 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph tell osd.2 flush_pg_stats 2026-03-10T13:14:02.281 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:14:02.296 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:14:02.308 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:14:02.319 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:02 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1413682057' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:14:02.319 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:02 vm07 ceph-mon[52048]: pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:14:02.319 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:02 vm07 ceph-mon[52048]: from='client.? 
192.168.123.107:0/1486458738' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:14:02.696 INFO:teuthology.orchestra.run.vm07.stdout:51539607557 2026-03-10T13:14:02.697 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph osd last-stat-seq osd.1 2026-03-10T13:14:02.737 INFO:teuthology.orchestra.run.vm07.stdout:34359738375 2026-03-10T13:14:02.737 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph osd last-stat-seq osd.0 2026-03-10T13:14:02.741 INFO:teuthology.orchestra.run.vm07.stdout:68719476738 2026-03-10T13:14:02.741 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph osd last-stat-seq osd.2 2026-03-10T13:14:02.955 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:14:02.972 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:14:03.189 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:14:03.316 INFO:teuthology.orchestra.run.vm07.stdout:51539607556 2026-03-10T13:14:03.375 INFO:teuthology.orchestra.run.vm07.stdout:34359738374 2026-03-10T13:14:03.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:03 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/19860986' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T13:14:03.502 INFO:teuthology.orchestra.run.vm07.stdout:68719476737 2026-03-10T13:14:03.538 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607557 got 51539607556 for osd.1 2026-03-10T13:14:03.570 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738375 got 34359738374 for osd.0 2026-03-10T13:14:03.679 INFO:tasks.cephadm.ceph_manager.ceph:need seq 68719476738 got 68719476737 for osd.2 2026-03-10T13:14:04.539 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph osd last-stat-seq osd.1 2026-03-10T13:14:04.571 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph osd last-stat-seq osd.0 2026-03-10T13:14:04.680 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph osd last-stat-seq osd.2 2026-03-10T13:14:04.728 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:14:04.742 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:04 vm07 ceph-mon[52048]: from='client.? 
192.168.123.107:0/2513931084' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T13:14:04.742 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:04 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/3668945243' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T13:14:04.742 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:04 vm07 ceph-mon[52048]: pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:14:04.853 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:14:05.017 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:14:05.050 INFO:teuthology.orchestra.run.vm07.stdout:51539607557 2026-03-10T13:14:05.213 INFO:teuthology.orchestra.run.vm07.stdout:34359738375 2026-03-10T13:14:05.226 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607557 got 51539607557 for osd.1 2026-03-10T13:14:05.226 DEBUG:teuthology.parallel:result is None 2026-03-10T13:14:05.357 INFO:teuthology.orchestra.run.vm07.stdout:68719476739 2026-03-10T13:14:05.405 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738375 got 34359738375 for osd.0 2026-03-10T13:14:05.405 DEBUG:teuthology.parallel:result is None 2026-03-10T13:14:05.525 INFO:tasks.cephadm.ceph_manager.ceph:need seq 68719476738 got 68719476739 for osd.2 2026-03-10T13:14:05.525 DEBUG:teuthology.parallel:result is None 2026-03-10T13:14:05.525 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T13:14:05.525 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph pg dump --format=json 2026-03-10T13:14:05.688 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:14:05.713 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:05 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2319668147' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T13:14:05.714 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:05 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1126782088' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T13:14:05.714 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:05 vm07 ceph-mon[52048]: from='client.? 
192.168.123.107:0/884893474' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T13:14:05.914 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:14:05.914 INFO:teuthology.orchestra.run.vm07.stderr:dumped all 2026-03-10T13:14:06.082 INFO:teuthology.orchestra.run.vm07.stdout:{"pg_ready":true,"pg_map":{"version":36,"stamp":"2026-03-10T13:14:05.524425+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":82728,"kb_used_data":1828,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62819544,"statfs":{"total":64411926528,"available":64327213056,"internally_reserved":0,"allocated":1871872,"data_stored":1519547,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4770,"internal_metadata":82373982},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"
num_store_stats":0,"stamp_delta":"4.000608"},"pg_stats":[{"pgid":"1.0","version":"18'32","reported_seq":57,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-10T13:14:00.144561+0000","last_change":"2026-03-10T13:13:59.146120+0000","last_active":"2026-03-10T13:14:00.144561+0000","last_peered":"2026-03-10T13:14:00.144561+0000","last_clean":"2026-03-10T13:14:00.144561+0000","last_became_active":"2026-03-10T13:13:59.145640+0000","last_became_peered":"2026-03-10T13:13:59.145640+0000","last_unstale":"2026-03-10T13:14:00.144561+0000","last_undegraded":"2026-03-10T13:14:00.144561+0000","last_fullsized":"2026-03-10T13:14:00.144561+0000","mapping_epoch":17,"log_start":"0'0","ondisk_log_start":"0'0","created":17,"last_epoch_clean":18,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:13:58.124209+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:13:58.124209+0000","last_clean_scrub_stamp":"2026-03-10T13:13:58.124209+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:32:56.668594+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,2],"acting":[1,0,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0
,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":16,"seq":68719476739,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27564,"kb_used_data":604,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939860,"statfs":{"total":21470642176,"available":21442416640,"internally_reserved":0,"allocated":618496,"data_stored":504131,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":12,"seq":51539607558,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27584,"kb_used_data":612,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939840,"statfs":{"total":21470642176,"available":21442396160,"internally_reserved":0,"allocated":626688,"data_stored":507708,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738376,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27580,"kb_used_data":612,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939844,"statfs":{"total":21470642176,"available":21442400256,"internally_reserved":0,"allocated":626688,"data_stored":507708,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T13:14:06.083 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph pg dump --format=json 2026-03-10T13:14:06.247 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:14:06.471 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:14:06.472 
INFO:teuthology.orchestra.run.vm07.stderr:dumped all 2026-03-10T13:14:06.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:06 vm07 ceph-mon[52048]: pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:14:06.618 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T13:14:06.618 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-10T13:14:06.618 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy
2026-03-10T13:14:06.618 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph health --format=json
2026-03-10T13:14:06.781 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:14:07.010 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:14:07.011 INFO:teuthology.orchestra.run.vm07.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]}
2026-03-10T13:14:07.157 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done
2026-03-10T13:14:07.157 INFO:tasks.cephadm:Setup complete, yielding
2026-03-10T13:14:07.157 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-10T13:14:07.159 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm07.local
2026-03-10T13:14:07.159 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- bash -c 'ceph osd pool create foo'
2026-03-10T13:14:07.310 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:14:07.426 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:07 vm07 ceph-mon[52048]: from='client.14252 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:14:07.426 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:07 vm07 ceph-mon[52048]: from='client.14254 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:14:07.426 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:07 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2103913071' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2026-03-10T13:14:08.379 INFO:teuthology.orchestra.run.vm07.stderr:pool 'foo' created
2026-03-10T13:14:08.534 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- bash -c 'rbd pool init foo'
2026-03-10T13:14:08.688 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:14:08.708 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:08 vm07 ceph-mon[52048]: pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:14:08.708 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:08 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2973613522' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch
2026-03-10T13:14:09.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:09 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2973613522' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished
2026-03-10T13:14:09.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:09 vm07 ceph-mon[52048]: osdmap e20: 3 total, 3 up, 3 in
2026-03-10T13:14:09.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:09 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1726501108' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch
2026-03-10T13:14:10.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:10 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1726501108' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished
2026-03-10T13:14:10.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:10 vm07 ceph-mon[52048]: osdmap e21: 3 total, 3 up, 3 in
2026-03-10T13:14:10.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:10 vm07 ceph-mon[52048]: pgmap v40: 33 pgs: 12 creating+peering, 20 unknown, 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:14:11.551 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- bash -c 'ceph orch apply iscsi foo u p'
2026-03-10T13:14:11.705 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config
2026-03-10T13:14:11.726 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:11 vm07 ceph-mon[52048]: osdmap e22: 3 total, 3 up, 3 in
2026-03-10T13:14:11.931 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled iscsi.foo update...
2026-03-10T13:14:12.087 INFO:teuthology.run_tasks:Running task workunit...
2026-03-10T13:14:12.091 INFO:tasks.workunit:Pulling workunits from ref 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
2026-03-10T13:14:12.091 INFO:tasks.workunit:Making a separate scratch dir for every client...
2026-03-10T13:14:12.091 DEBUG:teuthology.orchestra.run.vm07:> stat -- /home/ubuntu/cephtest/mnt.0
2026-03-10T13:14:12.109 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:14:12.109 INFO:teuthology.orchestra.run.vm07.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory
2026-03-10T13:14:12.109 DEBUG:teuthology.orchestra.run.vm07:> mkdir -- /home/ubuntu/cephtest/mnt.0
2026-03-10T13:14:12.178 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0
2026-03-10T13:14:12.178 DEBUG:teuthology.orchestra.run.vm07:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0
2026-03-10T13:14:12.235 INFO:tasks.workunit:timeout=3h
2026-03-10T13:14:12.235 INFO:tasks.workunit:cleanup=True
2026-03-10T13:14:12.236 DEBUG:teuthology.orchestra.run.vm07:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
2026-03-10T13:14:12.291 INFO:tasks.workunit.client.0.vm07.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'...
2026-03-10T13:14:12.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:12 vm07 ceph-mon[52048]: osdmap e23: 3 total, 3 up, 3 in
2026-03-10T13:14:12.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:12 vm07 ceph-mon[52048]: pgmap v43: 33 pgs: 12 active+clean, 12 creating+peering, 9 unknown; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:14:12.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:12 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:12.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:12 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:14:12.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:12 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:14:12.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:12 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:14:12.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:12 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:12.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:12 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.vvwqyx", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T13:14:12.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:12 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.vvwqyx", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-10T13:14:12.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:12 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:14:13.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "foo", "api_user": "u", "api_password": "p", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:14:13.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: Saving service iscsi.foo spec with placement count:1
2026-03-10T13:14:13.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: Deploying daemon iscsi.foo.vm07.vvwqyx on vm07
2026-03-10T13:14:13.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:13.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:13.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:13.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:13.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:14:13.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:14:13.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:14:13.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:13.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T13:14:13.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T13:14:13.572 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:13.573 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch
2026-03-10T13:14:13.573 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:13.573 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:14:13.573 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:14:13.573 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:14:13.573 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:13.573 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:13 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/3771342443' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T13:14:14.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:14 vm07 ceph-mon[52048]: Checking pool "foo" exists for service iscsi.foo
2026-03-10T13:14:14.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:14 vm07 ceph-mon[52048]: Metadata not up to date on all hosts. Skipping non agent specs
2026-03-10T13:14:14.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:14 vm07 ceph-mon[52048]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T13:14:14.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:14 vm07 ceph-mon[52048]: Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard
2026-03-10T13:14:14.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:14 vm07 ceph-mon[52048]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T13:14:14.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:14 vm07 ceph-mon[52048]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch
2026-03-10T13:14:14.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:14 vm07 ceph-mon[52048]: Metadata not up to date on all hosts. Skipping non agent specs
2026-03-10T13:14:14.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:14 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2382720265' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2631950073"}]: dispatch
2026-03-10T13:14:14.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:14 vm07 ceph-mon[52048]: pgmap v44: 33 pgs: 21 active+clean, 12 creating+peering; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 398 B/s wr, 0 op/s
2026-03-10T13:14:14.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:14 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:15.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:15 vm07 ceph-mon[52048]: Detected new or changed devices on vm07
2026-03-10T13:14:15.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:15 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2382720265' entity='client.iscsi.foo.vm07.vvwqyx' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2631950073"}]': finished
2026-03-10T13:14:15.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:15 vm07 ceph-mon[52048]: osdmap e24: 3 total, 3 up, 3 in
2026-03-10T13:14:15.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:15 vm07 ceph-mon[52048]: mgrmap e15: a(active, since 62s)
2026-03-10T13:14:15.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:15 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:15.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:15 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:15.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:15 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:14:15.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:15 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:15.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:15 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:14:15.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:15 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:14:15.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:15 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:15.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:15 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1307113882' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/2892363095"}]: dispatch
2026-03-10T13:14:16.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:16 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1307113882' entity='client.iscsi.foo.vm07.vvwqyx' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/2892363095"}]': finished
2026-03-10T13:14:16.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:16 vm07 ceph-mon[52048]: osdmap e25: 3 total, 3 up, 3 in
2026-03-10T13:14:16.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:16 vm07 ceph-mon[52048]: pgmap v47: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 398 B/s wr, 0 op/s
2026-03-10T13:14:16.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:16 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1424395504' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/2892363095"}]: dispatch
2026-03-10T13:14:17.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:17 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1424395504' entity='client.iscsi.foo.vm07.vvwqyx' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/2892363095"}]': finished
2026-03-10T13:14:17.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:17 vm07 ceph-mon[52048]: osdmap e26: 3 total, 3 up, 3 in
2026-03-10T13:14:17.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:17 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:17.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:17 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/4125014922' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/538258474"}]: dispatch
2026-03-10T13:14:18.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:18 vm07 ceph-mon[52048]: pgmap v49: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
2026-03-10T13:14:18.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:18 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/4125014922' entity='client.iscsi.foo.vm07.vvwqyx' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/538258474"}]': finished
2026-03-10T13:14:18.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:18 vm07 ceph-mon[52048]: osdmap e27: 3 total, 3 up, 3 in
2026-03-10T13:14:18.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:18 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2666004726' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2362313244"}]: dispatch
2026-03-10T13:14:19.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:19 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/2666004726' entity='client.iscsi.foo.vm07.vvwqyx' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2362313244"}]': finished
2026-03-10T13:14:19.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:19 vm07 ceph-mon[52048]: osdmap e28: 3 total, 3 up, 3 in
2026-03-10T13:14:19.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:19 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/3004536693' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/241309621"}]: dispatch
2026-03-10T13:14:20.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:20 vm07 ceph-mon[52048]: pgmap v52: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 252 B/s wr, 3 op/s
2026-03-10T13:14:20.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:20 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/3004536693' entity='client.iscsi.foo.vm07.vvwqyx' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/241309621"}]': finished
2026-03-10T13:14:20.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:20 vm07 ceph-mon[52048]: osdmap e29: 3 total, 3 up, 3 in
2026-03-10T13:14:20.840 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:20 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/3727047130' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2936972974"}]: dispatch
2026-03-10T13:14:21.565 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:21 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/3727047130' entity='client.iscsi.foo.vm07.vvwqyx' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2936972974"}]': finished
2026-03-10T13:14:21.566 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:21 vm07 ceph-mon[52048]: osdmap e30: 3 total, 3 up, 3 in
2026-03-10T13:14:21.566 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:21 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/3776142273' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/301384821"}]: dispatch
2026-03-10T13:14:21.566 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:21 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/3776142273' entity='client.iscsi.foo.vm07.vvwqyx' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/301384821"}]': finished
2026-03-10T13:14:21.566 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:21 vm07 ceph-mon[52048]: osdmap e31: 3 total, 3 up, 3 in
2026-03-10T13:14:21.566 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:21 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/346433829' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/301384821"}]: dispatch
2026-03-10T13:14:21.566 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:21 vm07 ceph-mon[52048]: pgmap v56: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 257 B/s wr, 3 op/s
2026-03-10T13:14:23.609 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:23 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/346433829' entity='client.iscsi.foo.vm07.vvwqyx' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/301384821"}]': finished
2026-03-10T13:14:23.609 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:23 vm07 ceph-mon[52048]: osdmap e32: 3 total, 3 up, 3 in
2026-03-10T13:14:23.609 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:23 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1774505223' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1844477233"}]: dispatch
2026-03-10T13:14:24.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:24 vm07 ceph-mon[52048]: from='client.14270 -' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:14:24.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:24 vm07 ceph-mon[52048]: from='client.? 192.168.123.107:0/1774505223' entity='client.iscsi.foo.vm07.vvwqyx' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1844477233"}]': finished
2026-03-10T13:14:24.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:24 vm07 ceph-mon[52048]: osdmap e33: 3 total, 3 up, 3 in
2026-03-10T13:14:24.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:24 vm07 ceph-mon[52048]: pgmap v59: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-10T13:14:26.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:25 vm07 ceph-mon[52048]: pgmap v60: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T13:14:28.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:27 vm07 ceph-mon[52048]: pgmap v61: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 802 B/s rd, 0 op/s
2026-03-10T13:14:30.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:29 vm07 ceph-mon[52048]: pgmap v62: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:14:32.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:31 vm07 ceph-mon[52048]: pgmap v63: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T13:14:33.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:33 vm07 ceph-mon[52048]: from='client.14270 -' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:14:33.841 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:33 vm07 ceph-mon[52048]: pgmap v64: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 992 B/s rd, 0 op/s
2026-03-10T13:14:36.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:35 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:36.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:35 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:36.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:35 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:36.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:35 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:36.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:35 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:36.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:35 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:36.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:35 vm07 ceph-mon[52048]: pgmap v65: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:14:38.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:37 vm07 ceph-mon[52048]: pgmap v66: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:14:40.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:39 vm07 ceph-mon[52048]: pgmap v67: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:14:42.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:41 vm07 ceph-mon[52048]: pgmap v68: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:14:44.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:43 vm07 ceph-mon[52048]: from='client.14270 -' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:14:44.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:43 vm07 ceph-mon[52048]: pgmap v69: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:14:46.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:45 vm07 ceph-mon[52048]: pgmap v70: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:14:48.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:47 vm07 ceph-mon[52048]: pgmap v71: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:14:50.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:49 vm07 ceph-mon[52048]: pgmap v72: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:14:52.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:51 vm07 ceph-mon[52048]: pgmap v73: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:14:54.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:53 vm07 ceph-mon[52048]: from='client.14270 -' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:14:54.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:53 vm07 ceph-mon[52048]: pgmap v74: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:14:56.340 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:55 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:56.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:55 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:56.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:55 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a'
2026-03-10T13:14:56.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:55 vm07 ceph-mon[52048]: pgmap v75: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:Note: switching to '75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b'.
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:You are in 'detached HEAD' state. You can look around, make experimental
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:changes and commit them, and you can discard any commits you make in this
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:state without impacting any branches by switching back to a branch.
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:If you want to create a new branch to retain commits you create, you may
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:do so (now or later) by using -c with the switch command. Example:
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr: git switch -c <new-branch-name>
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:Or undo this operation with:
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr: git switch -
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:Turn off this advice by setting config variable advice.detachedHead to false
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:
2026-03-10T13:14:57.245 INFO:tasks.workunit.client.0.vm07.stderr:HEAD is now at 75a68fd8ca3 qa/suites/orch/cephadm/osds: drop nvme_loop task
2026-03-10T13:14:57.250 DEBUG:teuthology.orchestra.run.vm07:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0
2026-03-10T13:14:57.305 INFO:tasks.workunit.client.0.vm07.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done
2026-03-10T13:14:57.306 INFO:tasks.workunit.client.0.vm07.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io'
2026-03-10T13:14:57.307 INFO:tasks.workunit.client.0.vm07.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test
2026-03-10T13:14:57.348 INFO:tasks.workunit.client.0.vm07.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io
2026-03-10T13:14:57.379 INFO:tasks.workunit.client.0.vm07.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read
2026-03-10T13:14:57.408 INFO:tasks.workunit.client.0.vm07.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io'
2026-03-10T13:14:57.410 INFO:tasks.workunit.client.0.vm07.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs'
2026-03-10T13:14:57.410 INFO:tasks.workunit.client.0.vm07.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc
2026-03-10T13:14:57.436 INFO:tasks.workunit.client.0.vm07.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs'
2026-03-10T13:14:57.438 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T13:14:57.438 DEBUG:teuthology.orchestra.run.vm07:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout
2026-03-10T13:14:57.493 INFO:tasks.workunit:Running workunits matching cephadm/test_iscsi_pids_limit.sh on client.0...
2026-03-10T13:14:57.493 INFO:tasks.workunit:Running workunit cephadm/test_iscsi_pids_limit.sh...
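[editor's note] The xtrace that follows boils down to roughly the shell logic sketched here. This is a hedged reconstruction read off the trace below, not the actual qa/workunits/cephadm/test_iscsi_pids_limit.sh source; the variable names and the two-container expectation are taken from the trace.

    #!/bin/bash -ex
    # count the iscsi containers deployed by 'ceph orch apply iscsi foo u p'
    ISCSI_CONT_IDS=$(sudo podman ps -qa --filter=name=iscsi)
    CONT_COUNT=$(echo ${ISCSI_CONT_IDS} | wc -w)
    test ${CONT_COUNT} -eq 2
    # the pids limit must be unlimited ("max"); cgroup v1 exposes it at
    # pids/pids.max, cgroup v2 at the root of the unified hierarchy
    for i in ${ISCSI_CONT_IDS}; do
      if [ "$(sudo podman exec ${i} cat /sys/fs/cgroup/pids/pids.max)" ]; then
        pid_limit=$(sudo podman exec ${i} cat /sys/fs/cgroup/pids/pids.max)
      else
        pid_limit=$(sudo podman exec ${i} cat /sys/fs/cgroup/pids.max)
      fi
      test "${pid_limit}" == max
    done
    # fork far more processes than a default pids limit would permit;
    # this only succeeds when the limit really is "max"
    for i in ${ISCSI_CONT_IDS}; do
      sudo podman exec ${i} /bin/sh -c 'for j in {0..20000}; do sleep 300 & done'
    done

The mass of sleep processes spawned by that last loop is also the most plausible trigger for the OOM-killer events that appear a little later in this log.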
2026-03-10T13:14:57.494 DEBUG:teuthology.orchestra.run.vm07:workunit test cephadm/test_iscsi_pids_limit.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh
2026-03-10T13:14:57.550 INFO:tasks.workunit.client.0.vm07.stderr:++ sudo podman ps -qa --filter=name=iscsi
2026-03-10T13:14:57.580 INFO:tasks.workunit.client.0.vm07.stderr:+ ISCSI_CONT_IDS='24dec31e5ee6
2026-03-10T13:14:57.580 INFO:tasks.workunit.client.0.vm07.stderr:012da22a686f'
2026-03-10T13:14:57.580 INFO:tasks.workunit.client.0.vm07.stderr:++ echo 24dec31e5ee6 012da22a686f
2026-03-10T13:14:57.580 INFO:tasks.workunit.client.0.vm07.stderr:++ wc -w
2026-03-10T13:14:57.585 INFO:tasks.workunit.client.0.vm07.stderr:+ CONT_COUNT=2
2026-03-10T13:14:57.585 INFO:tasks.workunit.client.0.vm07.stderr:+ test 2 -eq 2
2026-03-10T13:14:57.585 INFO:tasks.workunit.client.0.vm07.stderr:+ for i in ${ISCSI_CONT_IDS}
2026-03-10T13:14:57.585 INFO:tasks.workunit.client.0.vm07.stderr:++ sudo podman exec 24dec31e5ee6 cat /sys/fs/cgroup/pids/pids.max
2026-03-10T13:14:57.630 INFO:tasks.workunit.client.0.vm07.stderr:cat: /sys/fs/cgroup/pids/pids.max: No such file or directory
2026-03-10T13:14:57.680 INFO:tasks.workunit.client.0.vm07.stderr:+ '[' ']'
2026-03-10T13:14:57.680 INFO:tasks.workunit.client.0.vm07.stderr:++ sudo podman exec 24dec31e5ee6 cat /sys/fs/cgroup/pids.max
2026-03-10T13:14:57.772 INFO:tasks.workunit.client.0.vm07.stderr:+ '[' max ']'
2026-03-10T13:14:57.772 INFO:tasks.workunit.client.0.vm07.stderr:++ sudo podman exec 24dec31e5ee6 cat /sys/fs/cgroup/pids.max
2026-03-10T13:14:57.810 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:57 vm07 ceph-mon[52048]: pgmap v76: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:14:57.861 INFO:tasks.workunit.client.0.vm07.stderr:+ pid_limit=max
2026-03-10T13:14:57.861 INFO:tasks.workunit.client.0.vm07.stderr:+ test max == max
2026-03-10T13:14:57.862 INFO:tasks.workunit.client.0.vm07.stderr:+ for i in ${ISCSI_CONT_IDS}
2026-03-10T13:14:57.862 INFO:tasks.workunit.client.0.vm07.stderr:++ sudo podman exec 012da22a686f cat /sys/fs/cgroup/pids/pids.max
2026-03-10T13:14:57.899 INFO:tasks.workunit.client.0.vm07.stderr:cat: /sys/fs/cgroup/pids/pids.max: No such file or directory
2026-03-10T13:14:57.949 INFO:tasks.workunit.client.0.vm07.stderr:+ '[' ']'
2026-03-10T13:14:57.949 INFO:tasks.workunit.client.0.vm07.stderr:++ sudo podman exec 012da22a686f cat /sys/fs/cgroup/pids.max
2026-03-10T13:14:58.036 INFO:tasks.workunit.client.0.vm07.stderr:+ '[' max ']'
2026-03-10T13:14:58.036 INFO:tasks.workunit.client.0.vm07.stderr:++ sudo podman exec 012da22a686f cat /sys/fs/cgroup/pids.max
2026-03-10T13:14:58.126 INFO:tasks.workunit.client.0.vm07.stderr:+ pid_limit=max
2026-03-10T13:14:58.126 INFO:tasks.workunit.client.0.vm07.stderr:+ test max == max
2026-03-10T13:14:58.126 INFO:tasks.workunit.client.0.vm07.stderr:+ for i in ${ISCSI_CONT_IDS}
2026-03-10T13:14:58.126 INFO:tasks.workunit.client.0.vm07.stderr:+ sudo podman exec 24dec31e5ee6 /bin/sh -c 'for j in {0..20000}; do sleep 300 & done'
2026-03-10T13:15:00.099 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:14:59 vm07 ceph-mon[52048]: pgmap v77: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:15:02.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:15:01 vm07 ceph-mon[52048]: pgmap v78: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 851 B/s rd, 0 op/s
2026-03-10T13:15:04.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:15:03 vm07 ceph-mon[52048]: from='client.14270 -' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:15:04.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:15:03 vm07 ceph-mon[52048]: pgmap v79: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:15:06.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:15:05 vm07 ceph-mon[52048]: pgmap v80: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 851 B/s rd, 0 op/s
2026-03-10T13:15:08.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:15:07 vm07 ceph-mon[52048]: pgmap v81: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 851 B/s rd, 0 op/s
2026-03-10T13:15:10.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:15:09 vm07 ceph-mon[52048]: pgmap v82: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:15:10.236 INFO:tasks.workunit.client.0.vm07.stderr:+ for i in ${ISCSI_CONT_IDS}
2026-03-10T13:15:10.236 INFO:tasks.workunit.client.0.vm07.stderr:+ sudo podman exec 012da22a686f /bin/sh -c 'for j in {0..20000}; do sleep 300 & done'
2026-03-10T13:15:12.101 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:15:11 vm07 ceph-mon[52048]: pgmap v83: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:15:13.775 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:15:13 vm07 ceph-mon[52048]: from='client.14270 -' entity='client.iscsi.foo.vm07.vvwqyx' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:15:13.776 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:15:13 vm07 ceph-mon[52048]: pgmap v84: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:15:50.455 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:15:49 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0[60586]: 2026-03-10T13:15:49.353+0000 7fa220012640 -1 osd.0 33 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-10T13:15:45.445563+0000 front 2026-03-10T13:15:12.546895+0000 (oldest deadline 2026-03-10T13:15:39.884064+0000)
2026-03-10T13:15:50.455 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:15:49 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0[60586]: 2026-03-10T13:15:49.354+0000 7fa220012640 -1 osd.0 33 heartbeat_check: no reply from 192.168.123.107:6822 osd.2 since back 2026-03-10T13:15:12.546952+0000 front 2026-03-10T13:15:47.234454+0000 (oldest deadline 2026-03-10T13:15:39.884064+0000)
2026-03-10T13:15:50.715 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:15:50 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2[67090]: 2026-03-10T13:15:49.325+0000 7f616bd67640 -1 osd.2 33 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-10T13:15:49.326681+0000 front 2026-03-10T13:15:11.971367+0000 (oldest deadline 2026-03-10T13:15:35.569317+0000)
2026-03-10T13:15:50.715 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:15:50 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2[67090]: 2026-03-10T13:15:49.325+0000 7f616bd67640 -1 osd.2 33 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-10T13:15:11.971347+0000 front 2026-03-10T13:15:11.971392+0000 (oldest deadline 2026-03-10T13:15:35.569317+0000)
2026-03-10T13:15:56.099 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:15:54 vm07 ceph-mon[52048]: from='mgr.14150 192.168.123.107:0/2117842140' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:15:57.054 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:15:55 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0[60586]: 2026-03-10T13:15:52.500+0000 7fa220012640 -1 osd.0 33 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-10T13:15:45.445563+0000 front 2026-03-10T13:15:12.546895+0000 (oldest deadline 2026-03-10T13:15:39.884064+0000)
2026-03-10T13:15:58.001 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:15:55 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0[60586]: 2026-03-10T13:15:52.614+0000 7fa220012640 -1 osd.0 33 heartbeat_check: no reply from 192.168.123.107:6822 osd.2 since back 2026-03-10T13:15:12.546952+0000 front 2026-03-10T13:15:47.234454+0000 (oldest deadline 2026-03-10T13:15:39.884064+0000)
2026-03-10T13:16:01.089 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:15:58 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2[67090]: 2026-03-10T13:15:55.569+0000 7f616bd67640 -1 osd.2 33 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-10T13:15:49.326681+0000 front 2026-03-10T13:15:11.971367+0000 (oldest deadline 2026-03-10T13:15:35.569317+0000)
2026-03-10T13:16:01.463 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:15:59 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2[67090]: 2026-03-10T13:15:56.907+0000 7f616bd67640 -1 osd.2 33 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-10T13:15:52.309967+0000 front 2026-03-10T13:15:11.971392+0000 (oldest deadline 2026-03-10T13:15:35.569317+0000)
2026-03-10T13:16:02.033 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:16:01 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1[63991]: 2026-03-10T13:16:00.697+0000 7f4698517640 -1 osd.1 33 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-10T13:15:21.605154+0000 front 2026-03-10T13:15:52.024500+0000 (oldest deadline 2026-03-10T13:15:56.825566+0000)
2026-03-10T13:16:02.034 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:16:01 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1[63991]: 2026-03-10T13:16:01.199+0000 7f4698517640 -1 osd.1 33 heartbeat_check: no reply from 192.168.123.107:6822 osd.2 since back 2026-03-10T13:15:45.481350+0000 front 2026-03-10T13:16:00.515154+0000 (oldest deadline 2026-03-10T13:15:56.825566+0000)
2026-03-10T13:16:11.221 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:16:08 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a.service: A process of this unit has been killed by the OOM killer.
2026-03-10T13:16:11.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:11 vm07 ceph-mon[52048]: pgmap v85: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 775 B/s rd, 0 op/s
2026-03-10T13:16:11.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:11 vm07 ceph-mon[52048]: pgmap v86: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 254 B/s rd, 0 op/s
2026-03-10T13:16:11.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:11 vm07 ceph-mon[52048]: osd.0 reported failed by osd.1
2026-03-10T13:16:11.591 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:11 vm07 ceph-mon[52048]: osd.2 reported failed by osd.1
2026-03-10T13:16:11.888 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:16:11 vm07 podman[96639]: 2026-03-10 13:16:11.723356752 +0000 UTC m=+0.063560805 container died 7915ba879fdfc0daa333ca484a93017c6427175e64ec07dbc91fd9557105e62b (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
2026-03-10T13:16:12.310 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:12 vm07 ceph-mon[52048]: osd.0 reported failed by osd.2
2026-03-10T13:16:12.310 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:12 vm07 ceph-mon[52048]: osd.0 failed (root=default,host=vm07) (2 reporters from different osd after 37.037757 >= grace 20.000000)
2026-03-10T13:16:12.310 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:12 vm07 ceph-mon[52048]: osd.1 reported failed by osd.2
2026-03-10T13:16:12.310 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:12 vm07 ceph-mon[52048]: osd.0 failure report canceled by osd.2
2026-03-10T13:16:12.310 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:12 vm07 ceph-mon[52048]: osd.2 failure report canceled by osd.0
2026-03-10T13:16:12.310 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:12 vm07 ceph-mon[52048]: osd.1 reported failed by osd.0
2026-03-10T13:16:12.310 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:12 vm07 ceph-mon[52048]: osd.1 failed (root=default,host=vm07) (2 reporters from different osd after 37.038443 >= grace 20.000000)
2026-03-10T13:16:12.310 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:12 vm07 ceph-mon[52048]: osd.1 failure report canceled by osd.0
2026-03-10T13:16:12.310 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:12 vm07 ceph-mon[52048]: osd.0 failure report canceled by osd.1
2026-03-10T13:16:12.310 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:12 vm07 ceph-mon[52048]: osd.2 failure report canceled by osd.1
2026-03-10T13:16:12.310 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:12 vm07 ceph-mon[52048]: osd.1 failure report canceled by osd.2
2026-03-10T13:16:12.311 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:16:11 vm07 podman[96639]: 2026-03-10 13:16:11.964667038 +0000 UTC m=+0.304871080 container remove 7915ba879fdfc0daa333ca484a93017c6427175e64ec07dbc91fd9557105e62b (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
2026-03-10T13:16:12.311 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:16:11 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a.service: Main process exited, code=exited, status=137/n/a
2026-03-10T13:16:12.590 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:16:12 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a.service: Failed with result 'exit-code'.
2026-03-10T13:16:12.590 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:16:12 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a.service: Consumed 22.352s CPU time.
2026-03-10T13:16:13.332 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:13 vm07 ceph-mon[52048]: Health check failed: 2 osds down (OSD_DOWN)
2026-03-10T13:16:13.332 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:13 vm07 ceph-mon[52048]: osdmap e34: 3 total, 1 up, 3 in
2026-03-10T13:16:14.105 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:16:13 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1[63991]: 2026-03-10T13:16:13.831+0000 7f4696d14640 -1 osd.1 35 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T13:16:14.340 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:14 vm07 ceph-mon[52048]: osdmap e35: 3 total, 1 up, 3 in
2026-03-10T13:16:14.340 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:14 vm07 ceph-mon[52048]: Monitor daemon marked osd.1 down, but it is still running
2026-03-10T13:16:14.340 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:14 vm07 ceph-mon[52048]: map e35 wrongly marked me down at e34
2026-03-10T13:16:14.340 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:14 vm07 ceph-mon[52048]: osd.1 marked itself dead as of e35
2026-03-10T13:16:15.341 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:16:14 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0[60586]: 2026-03-10T13:16:14.864+0000 7fa21e80f640 -1 osd.0 36 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T13:16:15.843 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:15 vm07 ceph-mon[52048]: osd.1 [v2:192.168.123.107:6810/3813827411,v1:192.168.123.107:6811/3813827411] boot
2026-03-10T13:16:15.843 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:15 vm07 ceph-mon[52048]: osdmap e36: 3 total, 2 up, 3 in
2026-03-10T13:16:15.843 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:15 vm07 ceph-mon[52048]: osd.0 marked itself dead as of e36
2026-03-10T13:16:15.843 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:15 vm07 ceph-mon[52048]: Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T13:16:15.844 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:15 vm07 ceph-mon[52048]: Cluster is now healthy
2026-03-10T13:16:16.845 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:16 vm07 ceph-mon[52048]: osd.0 [v2:192.168.123.107:6802/546576916,v1:192.168.123.107:6803/546576916] boot
2026-03-10T13:16:16.845 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:16 vm07 ceph-mon[52048]: osdmap e37: 3 total, 3 up, 3 in
2026-03-10T13:16:16.845 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:16 vm07 ceph-mon[52048]: osdmap e38: 3 total, 3 up, 3 in
2026-03-10T13:16:23.423 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:21 vm07 ceph-mon[52048]: Monitor daemon marked osd.0 down, but it is still running
2026-03-10T13:16:23.423 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:21 vm07 ceph-mon[52048]: map e36 wrongly marked me down at e34
2026-03-10T13:16:23.456 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:16:22 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a.service: Scheduled restart job, restart counter is at 1.
2026-03-10T13:16:23.456 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:16:22 vm07 systemd[1]: Stopped Ceph mgr.a for bd98ed20-1c82-11f1-9239-ff903ae4ee6f.
2026-03-10T13:16:23.456 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:16:22 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a.service: Consumed 22.352s CPU time.
2026-03-10T13:16:23.456 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:16:22 vm07 systemd[1]: Starting Ceph mgr.a for bd98ed20-1c82-11f1-9239-ff903ae4ee6f...
2026-03-10T13:16:36.621 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:16:34 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.2.service: A process of this unit has been killed by the OOM killer.
2026-03-10T13:16:40.371 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:39 vm07 ceph-mon[52048]: Manager daemon a is unresponsive. No standby daemons available.
2026-03-10T13:16:41.348 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:40 vm07 ceph-mon[52048]: osd.2 reported immediately failed by osd.1
2026-03-10T13:16:41.348 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:40 vm07 ceph-mon[52048]: osd.2 failed (root=default,host=vm07) (connection refused reported by osd.1)
2026-03-10T13:16:41.348 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:40 vm07 ceph-mon[52048]: osd.2 reported immediately failed by osd.1
2026-03-10T13:16:41.348 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:40 vm07 ceph-mon[52048]: osd.2 reported immediately failed by osd.1
2026-03-10T13:16:41.348 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:40 vm07 ceph-mon[52048]: osdmap e39: 3 total, 3 up, 3 in
2026-03-10T13:16:41.348 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:40 vm07 ceph-mon[52048]: mgrmap e16: no daemons active (since 1.16448s)
2026-03-10T13:16:41.348 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:40 vm07 ceph-mon[52048]: Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T13:16:41.348 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:40 vm07 ceph-mon[52048]: osdmap e40: 3 total, 2 up, 3 in
2026-03-10T13:16:44.107 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:16:43 vm07 ceph-mon[52048]: osdmap e41: 3 total, 2 up, 3 in
2026-03-10T13:17:10.478 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:17:06 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.0.service: A process of this unit has been killed by the OOM killer.
2026-03-10T13:17:10.479 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:17:08 vm07 podman[98434]: 2026-03-10 13:17:08.650519002 +0000 UTC m=+30.048749726 container died a64617eb1fc82a64f65b3c7a8f4d232926c69548d8ed99b5fb36a899ef5aec9e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
2026-03-10T13:17:10.479 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:17:10 vm07 podman[98434]: 2026-03-10 13:17:10.429553709 +0000 UTC m=+31.827784433 container remove a64617eb1fc82a64f65b3c7a8f4d232926c69548d8ed99b5fb36a899ef5aec9e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team )
2026-03-10T13:17:11.598 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:17:11 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.2.service: Main process exited, code=exited, status=137/n/a
2026-03-10T13:17:24.844 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:24 vm07 ceph-mon[52048]: osdmap e42: 3 total, 2 up, 3 in
2026-03-10T13:17:26.602 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:17:26 vm07 podman[98599]: 2026-03-10 13:17:26.404643845 +0000 UTC m=+2.452671660 container died 012f75cd89b424c169d1eab0f946179756ac06e657c0c0811dd7bfa6d0a1caf5 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-10T13:17:26.603 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:17:26 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1[63991]: 2026-03-10T13:17:24.762+0000 7f4698517640 -1 osd.1 41 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-10T13:17:15.931373+0000 front 2026-03-10T13:17:16.516743+0000 (oldest deadline 2026-03-10T13:17:22.276363+0000)
2026-03-10T13:17:27.593 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:17:27 vm07 podman[98599]: 2026-03-10 13:17:27.369525559 +0000 UTC m=+3.417553354 container remove 012f75cd89b424c169d1eab0f946179756ac06e657c0c0811dd7bfa6d0a1caf5 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T13:17:27.593 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:17:27 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.0.service: Main process exited, code=exited, status=137/n/a
2026-03-10T13:17:28.101 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:27 vm07 ceph-mon[52048]: osd.0 reported failed by osd.1
2026-03-10T13:17:28.101 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:27 vm07 ceph-mon[52048]: osd.0 reported immediately failed by osd.1
2026-03-10T13:17:28.101 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:27 vm07 ceph-mon[52048]: osd.0 failed (root=default,host=vm07) (connection refused reported by osd.1)
2026-03-10T13:17:28.101 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:27 vm07 ceph-mon[52048]: osd.0 reported immediately failed by osd.1
2026-03-10T13:17:28.101 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:27 vm07 ceph-mon[52048]: osd.0 reported immediately failed by osd.1
2026-03-10T13:17:28.101 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:27 vm07 ceph-mon[52048]: osd.0 reported failed by osd.1
2026-03-10T13:17:28.101 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:27 vm07 ceph-mon[52048]: osd.0 reported immediately failed by osd.1
2026-03-10T13:17:28.101 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:27 vm07 ceph-mon[52048]: osd.0 reported immediately failed by osd.1
2026-03-10T13:17:28.101 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:27 vm07 ceph-mon[52048]: osd.0 reported immediately failed by osd.1
2026-03-10T13:17:28.101 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:27 vm07 ceph-mon[52048]: osd.0 reported immediately failed by osd.1
2026-03-10T13:17:28.101 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:27 vm07 ceph-mon[52048]: osd.0 reported immediately failed by osd.1
2026-03-10T13:17:28.101 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:27 vm07 ceph-mon[52048]: osd.0 reported immediately failed by osd.1
2026-03-10T13:17:29.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:28 vm07 ceph-mon[52048]: Health check update: 2 osds down (OSD_DOWN)
2026-03-10T13:17:29.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:28 vm07 ceph-mon[52048]: osdmap e43: 3 total, 1 up, 3 in
2026-03-10T13:17:29.090 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:17:28 vm07 podman[98708]: 2026-03-10 13:17:28.575086998 +0000 UTC m=+0.230552318 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T13:17:30.677 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:17:29 vm07 podman[98708]: 2026-03-10 13:17:29.693958325 +0000 UTC m=+1.349423646 container create 5c509d092c3db06d7c1664a0b1c57c7d98d29873ca80a79d8dfb53b92004adff (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223)
2026-03-10T13:17:32.097 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:17:31 vm07 ceph-mon[52048]: osdmap e44: 3 total, 1 up, 3 in
2026-03-10T13:17:33.350 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:17:32 vm07 podman[98708]: 2026-03-10 13:17:32.42781377 +0000 UTC m=+4.083279100 container init 5c509d092c3db06d7c1664a0b1c57c7d98d29873ca80a79d8dfb53b92004adff (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
2026-03-10T13:17:34.592 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:17:34 vm07 podman[98708]: 2026-03-10 13:17:34.275437867 +0000 UTC m=+5.930903197 container start 5c509d092c3db06d7c1664a0b1c57c7d98d29873ca80a79d8dfb53b92004adff (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
2026-03-10T13:17:34.592 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:17:34 vm07 bash[98708]: 5c509d092c3db06d7c1664a0b1c57c7d98d29873ca80a79d8dfb53b92004adff
2026-03-10T13:17:34.592 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:17:34 vm07 systemd[1]: Started Ceph mgr.a for bd98ed20-1c82-11f1-9239-ff903ae4ee6f.
2026-03-10T13:18:00.374 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:00 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:17:59.958+0000 7f79c8499140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T13:18:01.096 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:00 vm07 podman[101452]: 2026-03-10 13:18:00.960686491 +0000 UTC m=+0.170805713 container create 7dae7a7a898ed6bfd320759364517abb97533c4da69b9075c9d5f22c2f4c0780 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T13:18:01.096 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:01 vm07 podman[101452]: 2026-03-10 13:18:00.900919397 +0000 UTC m=+0.111038639 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T13:18:01.096 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:00 vm07 podman[101451]: 2026-03-10 13:18:00.965059927 +0000 UTC m=+0.175246446 container create 63f978d96175761e1ffc7f88952785445197b6df6f0aaf39726bc637c3e55ac8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
2026-03-10T13:18:01.096 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:00 vm07 podman[101451]: 2026-03-10 13:18:00.902843398 +0000 UTC m=+0.113029926 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T13:18:01.425 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:01 vm07 podman[101452]: 2026-03-10 13:18:01.333861077 +0000 UTC m=+0.543980299 container init 7dae7a7a898ed6bfd320759364517abb97533c4da69b9075c9d5f22c2f4c0780 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-10T13:18:01.425 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:01 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:01.101+0000 7f79c8499140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T13:18:01.425 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:01 vm07 podman[101451]: 2026-03-10 13:18:01.302008444 +0000 UTC m=+0.512194962 container init 63f978d96175761e1ffc7f88952785445197b6df6f0aaf39726bc637c3e55ac8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-deactivate, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20260223, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-10T13:18:01.843 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:01 vm07 podman[101452]: 2026-03-10 13:18:01.408428486 +0000 UTC m=+0.618547718 container start 7dae7a7a898ed6bfd320759364517abb97533c4da69b9075c9d5f22c2f4c0780 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
2026-03-10T13:18:01.843 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:01 vm07 podman[101452]: 2026-03-10 13:18:01.497804874 +0000 UTC m=+0.707924106 container attach 7dae7a7a898ed6bfd320759364517abb97533c4da69b9075c9d5f22c2f4c0780 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
2026-03-10T13:18:01.843 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:01 vm07 podman[101451]: 2026-03-10 13:18:01.344436763 +0000 UTC m=+0.554623281 container start 63f978d96175761e1ffc7f88952785445197b6df6f0aaf39726bc637c3e55ac8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-deactivate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
2026-03-10T13:18:01.844 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:01 vm07 podman[101451]: 2026-03-10 13:18:01.357366466 +0000 UTC m=+0.567552994 container attach 63f978d96175761e1ffc7f88952785445197b6df6f0aaf39726bc637c3e55ac8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
2026-03-10T13:18:06.620 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:06 vm07 conmon[101489]: conmon 7dae7a7a898ed6bfd320 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7dae7a7a898ed6bfd320759364517abb97533c4da69b9075c9d5f22c2f4c0780.scope/memory.events
2026-03-10T13:18:06.620 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:06 vm07 podman[101452]: 2026-03-10 13:18:06.46470019 +0000 UTC m=+5.674819422 container died 7dae7a7a898ed6bfd320759364517abb97533c4da69b9075c9d5f22c2f4c0780 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-deactivate, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
2026-03-10T13:18:06.620 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:06 vm07 conmon[101488]: conmon 63f978d96175761e1ffc : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-63f978d96175761e1ffc7f88952785445197b6df6f0aaf39726bc637c3e55ac8.scope/memory.events
2026-03-10T13:18:06.620 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:06 vm07 podman[101451]: 2026-03-10 13:18:06.42646504 +0000 UTC m=+5.636651558 container died 63f978d96175761e1ffc7f88952785445197b6df6f0aaf39726bc637c3e55ac8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team )
2026-03-10T13:18:06.620 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:06 vm07 podman[101451]: 2026-03-10 13:18:06.579106581 +0000 UTC m=+5.789293099 container remove 63f978d96175761e1ffc7f88952785445197b6df6f0aaf39726bc637c3e55ac8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-deactivate, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T13:18:06.620 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:06 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.2.service: Failed with result 'exit-code'.
2026-03-10T13:18:06.620 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:06 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.2.service: Consumed 12.861s CPU time.
2026-03-10T13:18:06.907 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:06 vm07 podman[101452]: 2026-03-10 13:18:06.620678197 +0000 UTC m=+5.830797429 container remove 7dae7a7a898ed6bfd320759364517abb97533c4da69b9075c9d5f22c2f4c0780 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-deactivate, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-10T13:18:06.907 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:06 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.0.service: Failed with result 'exit-code'.
2026-03-10T13:18:06.907 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:06 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.0.service: Consumed 12.551s CPU time.
2026-03-10T13:18:07.177 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:06 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:06.907+0000 7f79c8499140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T13:18:07.835 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:07 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:07.825+0000 7f79c8499140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T13:18:08.349 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:08 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T13:18:08.349 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:08 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T13:18:08.349 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:08 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: from numpy import show_config as show_numpy_config
2026-03-10T13:18:08.349 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:08 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:08.106+0000 7f79c8499140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T13:18:08.349 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:08 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:08.183+0000 7f79c8499140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T13:18:08.349 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:08 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:08.337+0000 7f79c8499140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T13:18:10.106 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:10.075+0000 7f79c8499140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T13:18:10.507 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:10.295+0000 7f79c8499140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T13:18:10.757 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:10.566+0000 7f79c8499140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T13:18:10.757 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:10.670+0000 7f79c8499140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T13:18:11.091 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:10.805+0000 7f79c8499140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T13:18:11.091 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:10 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:10.941+0000 7f79c8499140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T13:18:11.841 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:11 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:11.484+0000 7f79c8499140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T13:18:11.841 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:11 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:11.643+0000 7f79c8499140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T13:18:12.840 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:12 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:12.568+0000 7f79c8499140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T13:18:13.685 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:13 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:13.367+0000 7f79c8499140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T13:18:13.686 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:13 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:13.524+0000 7f79c8499140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T13:18:13.686 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:13 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:13.592+0000 7f79c8499140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T13:18:14.061 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:13 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:13.686+0000 7f79c8499140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T13:18:14.061 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:13 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:13.797+0000 7f79c8499140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T13:18:14.340 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:14 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:14.062+0000 7f79c8499140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T13:18:14.341 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:14 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:14.296+0000 7f79c8499140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T13:18:14.989 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:14 vm07 ceph-mon[52048]: Activating manager daemon a
2026-03-10T13:18:14.989 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:14 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:14.607+0000 7f79c8499140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T13:18:14.989 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:14 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a[98771]: 2026-03-10T13:18:14.700+0000 7f79c8499140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T13:18:16.084 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:15 vm07 ceph-mon[52048]: mgrmap e17: a(active, starting, since 0.0567186s)
2026-03-10T13:18:16.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:15 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T13:18:16.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:15 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T13:18:16.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:15 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T13:18:16.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:15 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:18:16.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:15 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T13:18:16.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:15 vm07 ceph-mon[52048]: Manager daemon a is now available
2026-03-10T13:18:16.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:15 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a'
2026-03-10T13:18:16.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:15 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:18:16.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:15 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T13:18:16.638 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:16 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.0.service: Scheduled restart job, restart counter is at 1.
2026-03-10T13:18:16.638 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:16 vm07 systemd[1]: Stopped Ceph osd.0 for bd98ed20-1c82-11f1-9239-ff903ae4ee6f.
2026-03-10T13:18:16.638 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:16 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.0.service: Consumed 12.551s CPU time.
2026-03-10T13:18:16.638 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:16 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.2.service: Scheduled restart job, restart counter is at 1.
2026-03-10T13:18:16.964 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:16 vm07 ceph-mon[52048]: mgrmap e18: a(active, since 1.08636s)
2026-03-10T13:18:16.964 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:16 vm07 systemd[1]: Starting Ceph osd.0 for bd98ed20-1c82-11f1-9239-ff903ae4ee6f...
2026-03-10T13:18:16.964 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:16 vm07 systemd[1]: Stopped Ceph osd.2 for bd98ed20-1c82-11f1-9239-ff903ae4ee6f.
2026-03-10T13:18:16.964 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:16 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.2.service: Consumed 12.861s CPU time.
2026-03-10T13:18:16.964 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:16 vm07 systemd[1]: Starting Ceph osd.2 for bd98ed20-1c82-11f1-9239-ff903ae4ee6f...
2026-03-10T13:18:17.613 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:17 vm07 podman[109422]: 2026-03-10 13:18:17.423386329 +0000 UTC m=+0.102726589 container create 8ae88ba4484bb3ae443cec9eab715f0a4f31f53b49eae0c362804610bf5e1484 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-10T13:18:17.614 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:17 vm07 podman[109422]: 2026-03-10 13:18:17.373053584 +0000 UTC m=+0.052393834 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T13:18:17.614 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:17 vm07 podman[109388]: 2026-03-10 13:18:17.417211491 +0000 UTC m=+0.147729177 container create 9bd949fea90bcf636cfcb90287f7b77347dc3f3a8d37679aa0e754eed13b0a24 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-10T13:18:17.614 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:17 vm07 podman[109388]: 2026-03-10 13:18:17.374945415 +0000 UTC m=+0.105463101 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: pgmap v2: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%)
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: pgmap v3: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%)
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: Health check failed: Slow OSD heartbeats on back (longest 10836.706ms) (OSD_SLOW_PING_TIME_BACK)
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: Health check failed: Slow OSD heartbeats on front (longest 13338.437ms) (OSD_SLOW_PING_TIME_FRONT)
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: Health check failed: Reduced data availability: 14 pgs inactive (PG_AVAILABILITY)
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: Health check failed: Degraded data redundancy: 10/15 objects degraded (66.667%), 4 pgs degraded (PG_DEGRADED)
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: mgrmap e19: a(active, since 2s)
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a'
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a'
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: [10/Mar/2026:13:18:17] ENGINE Bus STARTING
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: [10/Mar/2026:13:18:17] ENGINE Serving on https://192.168.123.107:7150
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: [10/Mar/2026:13:18:17] ENGINE Client ('192.168.123.107', 38572) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: [10/Mar/2026:13:18:17] ENGINE Serving on http://192.168.123.107:8765
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: [10/Mar/2026:13:18:17] ENGINE Bus STARTED
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a'
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a'
2026-03-10T13:18:17.988 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:18:17.989 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:18:17.989 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:17 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:18:17.989 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:17 vm07 podman[109422]: 2026-03-10 13:18:17.636817589 +0000 UTC m=+0.316157840 container init 8ae88ba4484bb3ae443cec9eab715f0a4f31f53b49eae0c362804610bf5e1484 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
2026-03-10T13:18:17.989 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:17 vm07 podman[109422]: 2026-03-10 13:18:17.685360073 +0000 UTC m=+0.364700333 container start 8ae88ba4484bb3ae443cec9eab715f0a4f31f53b49eae0c362804610bf5e1484 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate, CEPH_REF=squid, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS)
2026-03-10T13:18:17.989 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:17 vm07 podman[109422]: 2026-03-10 13:18:17.70151821 +0000 UTC m=+0.380858470 container attach 8ae88ba4484bb3ae443cec9eab715f0a4f31f53b49eae0c362804610bf5e1484 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team )
2026-03-10T13:18:17.989 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:17 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate[109649]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:17.989 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:17 vm07 bash[109422]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:17.989 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:17 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate[109649]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:17.989 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:17 vm07 bash[109422]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:17.989 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:17 vm07 podman[109388]: 2026-03-10 13:18:17.758877607 +0000 UTC m=+0.489395293 container init 9bd949fea90bcf636cfcb90287f7b77347dc3f3a8d37679aa0e754eed13b0a24 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T13:18:17.989 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:17 vm07 podman[109388]: 2026-03-10 13:18:17.794123933 +0000 UTC m=+0.524641620 container start 9bd949fea90bcf636cfcb90287f7b77347dc3f3a8d37679aa0e754eed13b0a24 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-10T13:18:17.989 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:17 vm07 podman[109388]: 2026-03-10 13:18:17.805178846 +0000 UTC m=+0.535696532 container attach 9bd949fea90bcf636cfcb90287f7b77347dc3f3a8d37679aa0e754eed13b0a24 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T13:18:18.343 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:18 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate[109656]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:18.343 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:18 vm07 bash[109388]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:18.343 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:18 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate[109656]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:18.343 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:18 vm07 bash[109388]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:19.342 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:18 vm07 ceph-mon[52048]: Updating vm07:/etc/ceph/ceph.conf
2026-03-10T13:18:19.342 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:18 vm07 ceph-mon[52048]: Updating vm07:/var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/config/ceph.conf
2026-03-10T13:18:19.342 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:18 vm07 ceph-mon[52048]: Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T13:18:19.342 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:18 vm07 ceph-mon[52048]: Updating vm07:/var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/config/ceph.client.admin.keyring
2026-03-10T13:18:19.342 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:18 vm07 ceph-mon[52048]: pgmap v4: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%)
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate[109649]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 bash[109422]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 bash[109422]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate[109649]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate[109649]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 bash[109422]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate[109649]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 bash[109422]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate[109649]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-02f4154d-34fe-49ea-b3a2-89b1088cbb04/osd-block-62c85112-9da4-4845-b7c8-809946f80c39 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 bash[109422]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-02f4154d-34fe-49ea-b3a2-89b1088cbb04/osd-block-62c85112-9da4-4845-b7c8-809946f80c39 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate[109656]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate[109656]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:19 vm07 bash[109388]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:19 vm07 bash[109388]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate[109656]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:19 vm07 bash[109388]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate[109656]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:19 vm07 bash[109388]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate[109656]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-5c96b81b-4f9f-4fc3-8f4b-7bf5320ff396/osd-block-d07c8bd9-87b1-4074-add0-71507e9620df --path /var/lib/ceph/osd/ceph-2 --no-mon-config
2026-03-10T13:18:19.841 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:19 vm07 bash[109388]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-5c96b81b-4f9f-4fc3-8f4b-7bf5320ff396/osd-block-d07c8bd9-87b1-4074-add0-71507e9620df --path /var/lib/ceph/osd/ceph-2 --no-mon-config
2026-03-10T13:18:20.232 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:20 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a'
2026-03-10T13:18:20.232 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:20 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a'
2026-03-10T13:18:20.232 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:20 vm07 ceph-mon[52048]: pgmap v5: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%)
2026-03-10T13:18:20.232 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:20 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a'
2026-03-10T13:18:20.232 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:20 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a'
2026-03-10T13:18:20.232 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:20 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:18:20.232 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:20 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:18:20.232 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:20 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:18:20.232 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:20 vm07 ceph-mon[52048]: pgmap v6: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%)
2026-03-10T13:18:20.232 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:20 vm07 ceph-mon[52048]: pgmap v7: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%)
2026-03-10T13:18:20.232 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:20 vm07 ceph-mon[52048]: pgmap v8: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%)
2026-03-10T13:18:20.232 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:20 vm07 ceph-mon[52048]: pgmap v9: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%)
2026-03-10T13:18:20.232 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:20 vm07 ceph-mon[52048]: from='mgr.14294 192.168.123.107:0/2168801528' entity='mgr.a'
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate[109649]: Running command: /usr/bin/ln -snf /dev/ceph-02f4154d-34fe-49ea-b3a2-89b1088cbb04/osd-block-62c85112-9da4-4845-b7c8-809946f80c39 /var/lib/ceph/osd/ceph-0/block
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 bash[109422]: Running command: /usr/bin/ln -snf /dev/ceph-02f4154d-34fe-49ea-b3a2-89b1088cbb04/osd-block-62c85112-9da4-4845-b7c8-809946f80c39 /var/lib/ceph/osd/ceph-0/block
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate[109649]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 bash[109422]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate[109649]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 bash[109422]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate[109649]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 bash[109422]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate[109649]: --> ceph-volume lvm activate successful for osd ID: 0
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:19 vm07 bash[109422]: --> ceph-volume lvm activate successful for osd ID: 0
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:20 vm07 conmon[109649]: conmon 8ae88ba4484bb3ae443c : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ae88ba4484bb3ae443cec9eab715f0a4f31f53b49eae0c362804610bf5e1484.scope/memory.events
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:20 vm07 podman[109422]: 2026-03-10 13:18:20.052199844 +0000 UTC m=+2.731540104 container died 8ae88ba4484bb3ae443cec9eab715f0a4f31f53b49eae0c362804610bf5e1484 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:19 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate[109656]: Running command: /usr/bin/ln -snf /dev/ceph-5c96b81b-4f9f-4fc3-8f4b-7bf5320ff396/osd-block-d07c8bd9-87b1-4074-add0-71507e9620df /var/lib/ceph/osd/ceph-2/block
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:19 vm07 bash[109388]: Running command: /usr/bin/ln -snf /dev/ceph-5c96b81b-4f9f-4fc3-8f4b-7bf5320ff396/osd-block-d07c8bd9-87b1-4074-add0-71507e9620df /var/lib/ceph/osd/ceph-2/block
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate[109656]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 bash[109388]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate[109656]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 bash[109388]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate[109656]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 bash[109388]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate[109656]: --> ceph-volume lvm activate successful for osd ID: 2
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 bash[109388]: --> ceph-volume lvm activate successful for osd ID: 2
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 conmon[109656]: conmon 9bd949fea90bcf636cfc : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9bd949fea90bcf636cfcb90287f7b77347dc3f3a8d37679aa0e754eed13b0a24.scope/memory.events
2026-03-10T13:18:20.233 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 podman[109388]: 2026-03-10 13:18:20.078714115 +0000 UTC m=+2.809231801 container died 9bd949fea90bcf636cfcb90287f7b77347dc3f3a8d37679aa0e754eed13b0a24 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223)
2026-03-10T13:18:20.565 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:20 vm07 podman[109422]: 2026-03-10 13:18:20.249545585 +0000 UTC m=+2.928885845 container remove 8ae88ba4484bb3ae443cec9eab715f0a4f31f53b49eae0c362804610bf5e1484 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-activate, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default)
2026-03-10T13:18:20.565 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 podman[109388]: 2026-03-10 13:18:20.262667537 +0000 UTC m=+2.993185223 container remove 9bd949fea90bcf636cfcb90287f7b77347dc3f3a8d37679aa0e754eed13b0a24 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=squid)
2026-03-10T13:18:20.826 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:20 vm07 podman[112753]: 2026-03-10 13:18:20.626242906 +0000 UTC m=+0.102799898 container create 8afe39f7f5f1b0578f22db6f0f4231bcfcf5a42a90fdf02449e5faaf08ee5b65 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3)
2026-03-10T13:18:20.827 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:20 vm07 podman[112753]: 2026-03-10 13:18:20.591903728 +0000 UTC m=+0.068460729 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T13:18:20.827 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:20 vm07 podman[112753]: 2026-03-10 13:18:20.727131523 +0000 UTC m=+0.203688515 container init 8afe39f7f5f1b0578f22db6f0f4231bcfcf5a42a90fdf02449e5faaf08ee5b65 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T13:18:20.827 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:20 vm07 podman[112753]: 2026-03-10 13:18:20.788683899 +0000 UTC m=+0.265240891 container start 8afe39f7f5f1b0578f22db6f0f4231bcfcf5a42a90fdf02449e5faaf08ee5b65 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-10T13:18:20.827 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:20 vm07 bash[112753]: 8afe39f7f5f1b0578f22db6f0f4231bcfcf5a42a90fdf02449e5faaf08ee5b65
2026-03-10T13:18:20.827 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 podman[112732]: 2026-03-10 13:18:20.566237283 +0000 UTC m=+0.062102767 container create 51764ee4f9ce8f7df8ff67508699181cfc69f777237f7be947d3c1ff2dc8b276 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS,
ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T13:18:20.827 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 podman[112732]: 2026-03-10 13:18:20.542518917 +0000 UTC m=+0.038384411 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T13:18:20.827 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 podman[112732]: 2026-03-10 13:18:20.72098135 +0000 UTC m=+0.216846835 container init 51764ee4f9ce8f7df8ff67508699181cfc69f777237f7be947d3c1ff2dc8b276 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T13:18:20.827 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 podman[112732]: 2026-03-10 13:18:20.790595668 +0000 UTC m=+0.286461142 container start 51764ee4f9ce8f7df8ff67508699181cfc69f777237f7be947d3c1ff2dc8b276 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS) 2026-03-10T13:18:20.827 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 bash[112732]: 51764ee4f9ce8f7df8ff67508699181cfc69f777237f7be947d3c1ff2dc8b276 2026-03-10T13:18:21.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:21 vm07 ceph-mon[52048]: Health check failed: 1 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN) 2026-03-10T13:18:21.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:21 vm07 ceph-mon[52048]: Health check failed: 3 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON) 2026-03-10T13:18:21.091 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:20 vm07 systemd[1]: Started Ceph osd.0 for bd98ed20-1c82-11f1-9239-ff903ae4ee6f. 2026-03-10T13:18:21.091 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:20 vm07 systemd[1]: Started Ceph osd.2 for bd98ed20-1c82-11f1-9239-ff903ae4ee6f. 
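The activate step above (the osd-0-activate and osd-2-activate containers) is ceph-volume's usual preparation before the long-running OSD container starts: symlink the LVM logical volume to the OSD's block file, then fix ownership on the symlink, the device-mapper node, and the OSD directory. A minimal Python sketch replaying the commands from the journal lines above (the helper name and wrapping are ours; this needs root):

    import subprocess

    def activate_osd(osd_id: int, lv_path: str, dm_node: str) -> None:
        """Replays the commands ceph-volume logs during 'lvm activate'."""
        osd_dir = f"/var/lib/ceph/osd/ceph-{osd_id}"
        # ln -snf <lv> /var/lib/ceph/osd/ceph-<id>/block
        subprocess.run(["ln", "-snf", lv_path, f"{osd_dir}/block"], check=True)
        # chown -h on the symlink, then -R on the dm node and the OSD dir,
        # exactly as in the journal above.
        subprocess.run(["chown", "-h", "ceph:ceph", f"{osd_dir}/block"], check=True)
        subprocess.run(["chown", "-R", "ceph:ceph", dm_node], check=True)
        subprocess.run(["chown", "-R", "ceph:ceph", osd_dir], check=True)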
2026-03-10T13:18:21.840 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:21 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0[112899]: 2026-03-10T13:18:21.390+0000 7ff733f65740 -1 Falling back to public interface 2026-03-10T13:18:21.841 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:21 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2[112873]: 2026-03-10T13:18:21.384+0000 7fcdc4743740 -1 Falling back to public interface 2026-03-10T13:18:22.340 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:22 vm07 ceph-mon[52048]: pgmap v10: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%) 2026-03-10T13:18:22.341 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:22 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0[112899]: 2026-03-10T13:18:22.040+0000 7ff733f65740 -1 osd.0 41 log_to_monitors true 2026-03-10T13:18:23.090 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:23 vm07 ceph-mon[52048]: from='osd.0 [v2:192.168.123.107:6813/2104316467,v1:192.168.123.107:6818/2104316467]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T13:18:23.091 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:23 vm07 ceph-mon[52048]: from='osd.2 [v2:192.168.123.107:6804/1596717882,v1:192.168.123.107:6805/1596717882]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:18:23.091 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:23 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0[112899]: 2026-03-10T13:18:23.048+0000 7ff72b50f640 -1 osd.0 41 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T13:18:23.091 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:22 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2[112873]: 2026-03-10T13:18:22.813+0000 7fcdc4743740 -1 osd.2 38 log_to_monitors true 2026-03-10T13:18:23.091 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:23 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2[112873]: 2026-03-10T13:18:23.055+0000 7fcdbbced640 -1 osd.2 38 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T13:18:23.667 INFO:tasks.workunit.client.0.vm07.stderr:+ for i in ${ISCSI_CONT_IDS} 2026-03-10T13:18:23.667 INFO:tasks.workunit.client.0.vm07.stderr:++ sudo podman exec 24dec31e5ee6 /bin/sh -c 'ps -ef | grep -c sleep' 2026-03-10T13:18:23.707 INFO:tasks.workunit.client.0.vm07.stderr:Error: no container with name or ID "24dec31e5ee6" found: no such container 2026-03-10T13:18:23.716 INFO:tasks.workunit.client.0.vm07.stderr:+ SLEEP_COUNT= 2026-03-10T13:18:23.716 DEBUG:teuthology.orchestra.run:got remote process result: 125 2026-03-10T13:18:23.716 INFO:tasks.workunit:Stopping ['cephadm/test_iscsi_pids_limit.sh', 'cephadm/test_iscsi_etc_hosts.sh', 'cephadm/test_iscsi_setup.sh'] on client.0... 2026-03-10T13:18:23.717 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 2026-03-10T13:18:24.198 ERROR:teuthology.run_tasks:Saw exception from tasks. 
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 105, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 83, in run_one_task
    return task(**kwargs)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/workunit.py", line 125, in task
    with parallel() as p:
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 84, in __exit__
    for result in self:
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 98, in __next__
    resurrect_traceback(result)
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 30, in resurrect_traceback
    raise exc.exc_info[1]
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 23, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/workunit.py", line 433, in _run_tests
    remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on vm07 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh'
2026-03-10T13:18:24.199 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-10T13:18:24.201 INFO:tasks.cephadm:Teardown begin 2026-03-10T13:18:24.202 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:18:24.236 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-10T13:18:24.236 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f -- ceph mgr module disable cephadm 2026-03-10T13:18:24.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:24 vm07 ceph-mon[52048]: from='osd.0 [v2:192.168.123.107:6813/2104316467,v1:192.168.123.107:6818/2104316467]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T13:18:24.341 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:24 vm07 ceph-mon[52048]: from='osd.2 [v2:192.168.123.107:6804/1596717882,v1:192.168.123.107:6805/1596717882]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T13:18:24.342 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:24 vm07 ceph-mon[52048]: osdmap e45: 3 total, 1 up, 3 in 2026-03-10T13:18:24.342
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:24 vm07 ceph-mon[52048]: from='osd.0 [v2:192.168.123.107:6813/2104316467,v1:192.168.123.107:6818/2104316467]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T13:18:24.342 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:24 vm07 ceph-mon[52048]: from='osd.2 [v2:192.168.123.107:6804/1596717882,v1:192.168.123.107:6805/1596717882]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T13:18:24.342 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:24 vm07 ceph-mon[52048]: pgmap v12: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 82 MiB used, 60 GiB / 60 GiB avail; 10/15 objects degraded (66.667%) 2026-03-10T13:18:24.490 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/mon.a/config 2026-03-10T13:18:24.508 INFO:teuthology.orchestra.run.vm07.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory 2026-03-10T13:18:24.528 DEBUG:teuthology.orchestra.run:got remote process result: 125 2026-03-10T13:18:24.528 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-10T13:18:24.528 DEBUG:teuthology.orchestra.run.vm07:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T13:18:24.550 INFO:tasks.cephadm:Stopping all daemons... 2026-03-10T13:18:24.550 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-10T13:18:24.550 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mon.a 2026-03-10T13:18:24.761 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:24 vm07 systemd[1]: Stopping Ceph mon.a for bd98ed20-1c82-11f1-9239-ff903ae4ee6f... 
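Both non-zero exits in this stretch are status 125, podman's own error code, meaning the failure happened before the wrapped command ever ran. The workunit died because `podman exec 24dec31e5ee6` named a container ID that no longer existed by the time the check ran (consistent with the CEPHADM_FAILED_DAEMON / agent-down health warnings shortly before), and the teardown's `cephadm shell` died because the admin keyring it bind-mounts with -k had just been removed by the preceding `rm -f`. A hedged sketch of a guard for the second case (helper name and structure are hypothetical, not the cephadm task's actual code):

    import os
    import subprocess

    KEYRING = "/etc/ceph/ceph.client.admin.keyring"

    def cephadm_shell(fsid: str, *cmd: str) -> None:
        args = ["sudo", "/home/ubuntu/cephtest/cephadm", "shell",
                "-c", "/etc/ceph/ceph.conf", "--fsid", fsid]
        if os.path.exists(KEYRING):
            # Skip -k once teardown has deleted the keyring; otherwise podman
            # fails the bind mount with "statfs ...: no such file or directory"
            # and exits 125, as seen below.
            args += ["-k", KEYRING]
        subprocess.run(args + ["--", *cmd], check=True)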
2026-03-10T13:18:25.016 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:24 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a[52022]: 2026-03-10T13:18:24.798+0000 7f05f4ea8640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T13:18:25.016 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:24 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a[52022]: 2026-03-10T13:18:24.798+0000 7f05f4ea8640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T13:18:25.016 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:24 vm07 podman[118517]: 2026-03-10 13:18:24.848976955 +0000 UTC m=+0.087212690 container died ac917e44bc18e080c0ad065518ab082b8e9735d392976d8d37b1e29d7aee2fef (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3) 2026-03-10T13:18:25.016 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:24 vm07 podman[118517]: 2026-03-10 13:18:24.988059807 +0000 UTC m=+0.226295542 container remove ac917e44bc18e080c0ad065518ab082b8e9735d392976d8d37b1e29d7aee2fef (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T13:18:25.016 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:24 vm07 bash[118517]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mon-a 2026-03-10T13:18:25.141 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mon.a.service' 2026-03-10T13:18:25.592 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:25 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mon.a.service: Deactivated successfully. 2026-03-10T13:18:25.592 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:25 vm07 systemd[1]: Stopped Ceph mon.a for bd98ed20-1c82-11f1-9239-ff903ae4ee6f. 
2026-03-10T13:18:25.592 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 10 13:18:25 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mon.a.service: Consumed 13.402s CPU time. 2026-03-10T13:18:25.735 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T13:18:25.735 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-10T13:18:25.735 INFO:tasks.cephadm.mgr.a:Stopping mgr.a... 2026-03-10T13:18:25.735 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a 2026-03-10T13:18:25.995 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:25 vm07 systemd[1]: Stopping Ceph mgr.a for bd98ed20-1c82-11f1-9239-ff903ae4ee6f... 2026-03-10T13:18:26.242 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a.service' 2026-03-10T13:18:26.272 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:26 vm07 podman[118848]: 2026-03-10 13:18:26.008473464 +0000 UTC m=+0.085305871 container died 5c509d092c3db06d7c1664a0b1c57c7d98d29873ca80a79d8dfb53b92004adff (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T13:18:26.272 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:26 vm07 podman[118848]: 2026-03-10 13:18:26.145436278 +0000 UTC m=+0.222268695 container remove 5c509d092c3db06d7c1664a0b1c57c7d98d29873ca80a79d8dfb53b92004adff (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20260223, CEPH_REF=squid) 2026-03-10T13:18:26.272 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:26 vm07 bash[118848]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-mgr-a 2026-03-10T13:18:26.272 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:26 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a.service: Deactivated successfully. 2026-03-10T13:18:26.272 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:26 vm07 systemd[1]: Stopped Ceph mgr.a for bd98ed20-1c82-11f1-9239-ff903ae4ee6f. 2026-03-10T13:18:26.272 INFO:journalctl@ceph.mgr.a.vm07.stdout:Mar 10 13:18:26 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@mgr.a.service: Consumed 11.400s CPU time. 
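Each daemon from here on is stopped the same way: systemctl stop on the per-fsid unit (podman-init forwards the SIGTERM into the container), wait for "Deactivated successfully", then pkill the journalctl follower that was streaming the unit's logs back into this archive. A minimal sketch, with the commands lifted from the DEBUG lines above (the helper is ours):

    import subprocess

    FSID = "bd98ed20-1c82-11f1-9239-ff903ae4ee6f"

    def stop_daemon(role: str) -> None:
        unit = f"ceph-{FSID}@{role}"
        subprocess.run(["sudo", "systemctl", "stop", unit], check=True)
        # The follower may already be gone, so a non-zero pkill is fine here.
        subprocess.run(
            ["sudo", "pkill", "-f", f"journalctl -f -n 0 -u {unit}.service"],
            check=False)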
2026-03-10T13:18:26.720 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T13:18:26.720 INFO:tasks.cephadm.mgr.a:Stopped mgr.a 2026-03-10T13:18:26.720 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-10T13:18:26.720 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.0 2026-03-10T13:18:27.091 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:26 vm07 systemd[1]: Stopping Ceph osd.0 for bd98ed20-1c82-11f1-9239-ff903ae4ee6f... 2026-03-10T13:18:27.091 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:26 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0[112899]: 2026-03-10T13:18:26.911+0000 7ff730efa640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T13:18:27.091 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:26 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0[112899]: 2026-03-10T13:18:26.911+0000 7ff730efa640 -1 osd.0 46 *** Got signal Terminated *** 2026-03-10T13:18:27.091 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:26 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0[112899]: 2026-03-10T13:18:26.911+0000 7ff730efa640 -1 osd.0 46 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T13:18:32.282 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:31 vm07 podman[118978]: 2026-03-10 13:18:31.932632743 +0000 UTC m=+5.047379993 container died 8afe39f7f5f1b0578f22db6f0f4231bcfcf5a42a90fdf02449e5faaf08ee5b65 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3) 2026-03-10T13:18:32.282 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:32 vm07 podman[118978]: 2026-03-10 13:18:32.068810226 +0000 UTC m=+5.183557476 container remove 8afe39f7f5f1b0578f22db6f0f4231bcfcf5a42a90fdf02449e5faaf08ee5b65 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0) 2026-03-10T13:18:32.282 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:32 vm07 bash[118978]: 
ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0 2026-03-10T13:18:32.591 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:32 vm07 podman[119190]: 2026-03-10 13:18:32.283325737 +0000 UTC m=+0.019268779 container create 92e3b48a89670cd4890815ed736815b8a8c199b2230962c86fe7f3ef5c94d0ac (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-deactivate, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3) 2026-03-10T13:18:32.591 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:32 vm07 podman[119190]: 2026-03-10 13:18:32.331753967 +0000 UTC m=+0.067697019 container init 92e3b48a89670cd4890815ed736815b8a8c199b2230962c86fe7f3ef5c94d0ac (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-deactivate, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T13:18:32.591 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:32 vm07 podman[119190]: 2026-03-10 13:18:32.342206072 +0000 UTC m=+0.078149124 container start 92e3b48a89670cd4890815ed736815b8a8c199b2230962c86fe7f3ef5c94d0ac (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T13:18:32.591 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:32 vm07 podman[119190]: 2026-03-10 13:18:32.343567941 +0000 UTC m=+0.079510993 container attach 92e3b48a89670cd4890815ed736815b8a8c199b2230962c86fe7f3ef5c94d0ac (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3) 2026-03-10T13:18:32.591 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:32 vm07 podman[119190]: 2026-03-10 13:18:32.275891951 +0000 UTC m=+0.011835014 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T13:18:32.591 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 10 13:18:32 vm07 podman[119190]: 2026-03-10 13:18:32.479781632 +0000 UTC m=+0.215724684 container died 92e3b48a89670cd4890815ed736815b8a8c199b2230962c86fe7f3ef5c94d0ac (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-0-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T13:18:32.617 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.0.service' 2026-03-10T13:18:33.071 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T13:18:33.071 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-10T13:18:33.071 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-10T13:18:33.071 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.1 2026-03-10T13:18:33.341 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:33 vm07 systemd[1]: Stopping Ceph osd.1 for bd98ed20-1c82-11f1-9239-ff903ae4ee6f... 
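Because osd_fast_shutdown=true, the OSD treats SIGTERM as an immediate shutdown, after which a short-lived osd-N-deactivate container runs and exits within a fraction of a second. The log shows only that container's lifecycle, not the command inside it; presumably it is ceph-volume releasing the OSD's mounts and device mappings, roughly:

    import subprocess

    def deactivate_osd(osd_id: int) -> None:
        # Hedged guess at the deactivate container's payload: ceph-volume's
        # 'lvm deactivate' unmounts /var/lib/ceph/osd/ceph-<id> and closes
        # any device mappings set up at activate time.
        subprocess.run(["ceph-volume", "lvm", "deactivate", str(osd_id)],
                       check=True)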
2026-03-10T13:18:33.341 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:33 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1[63991]: 2026-03-10T13:18:33.195+0000 7f469befe640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T13:18:33.341 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:33 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1[63991]: 2026-03-10T13:18:33.195+0000 7f469befe640 -1 osd.1 46 *** Got signal Terminated *** 2026-03-10T13:18:33.341 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:33 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1[63991]: 2026-03-10T13:18:33.195+0000 7f469befe640 -1 osd.1 46 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T13:18:38.553 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:38 vm07 podman[119311]: 2026-03-10 13:18:38.224422215 +0000 UTC m=+5.046104001 container died 0b71ac435c3f54c19d674c81347d833370c507a1ab9c11a38375791a402ab236 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T13:18:38.553 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:38 vm07 podman[119311]: 2026-03-10 13:18:38.349551001 +0000 UTC m=+5.171232787 container remove 0b71ac435c3f54c19d674c81347d833370c507a1ab9c11a38375791a402ab236 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid) 2026-03-10T13:18:38.553 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:38 vm07 bash[119311]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1 2026-03-10T13:18:38.841 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:38 vm07 podman[119390]: 2026-03-10 13:18:38.553945144 +0000 UTC m=+0.019035051 container create 293e9e82518b84ed5c42922323498288be679648c6b5101f6c94c174f7a2e866 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1-deactivate, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 
Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS) 2026-03-10T13:18:38.841 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:38 vm07 podman[119390]: 2026-03-10 13:18:38.601549128 +0000 UTC m=+0.066639035 container init 293e9e82518b84ed5c42922323498288be679648c6b5101f6c94c174f7a2e866 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223) 2026-03-10T13:18:38.841 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:38 vm07 podman[119390]: 2026-03-10 13:18:38.610236649 +0000 UTC m=+0.075326556 container start 293e9e82518b84ed5c42922323498288be679648c6b5101f6c94c174f7a2e866 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1-deactivate, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T13:18:38.841 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:38 vm07 podman[119390]: 2026-03-10 13:18:38.612244407 +0000 UTC m=+0.077334323 container attach 293e9e82518b84ed5c42922323498288be679648c6b5101f6c94c174f7a2e866 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20260223) 2026-03-10T13:18:38.841 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:38 vm07 podman[119390]: 2026-03-10 13:18:38.546540293 +0000 UTC m=+0.011630210 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T13:18:38.841 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:38 vm07 podman[119390]: 2026-03-10 13:18:38.745833377 +0000 UTC m=+0.210923284 container died 293e9e82518b84ed5c42922323498288be679648c6b5101f6c94c174f7a2e866 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1-deactivate, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T13:18:38.883 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.1.service' 2026-03-10T13:18:39.341 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:38 vm07 podman[119390]: 2026-03-10 13:18:38.867272596 +0000 UTC m=+0.332362503 container remove 293e9e82518b84ed5c42922323498288be679648c6b5101f6c94c174f7a2e866 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-1-deactivate, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3) 2026-03-10T13:18:39.341 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:38 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.1.service: Deactivated successfully. 2026-03-10T13:18:39.341 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:38 vm07 systemd[1]: Stopped Ceph osd.1 for bd98ed20-1c82-11f1-9239-ff903ae4ee6f. 2026-03-10T13:18:39.341 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:18:38 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.1.service: Consumed 15.065s CPU time. 2026-03-10T13:18:39.345 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T13:18:39.345 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-10T13:18:39.345 INFO:tasks.cephadm.osd.2:Stopping osd.2... 
2026-03-10T13:18:39.345 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.2 2026-03-10T13:18:39.841 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:39 vm07 systemd[1]: Stopping Ceph osd.2 for bd98ed20-1c82-11f1-9239-ff903ae4ee6f... 2026-03-10T13:18:39.841 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:39 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2[112873]: 2026-03-10T13:18:39.515+0000 7fcdc16d8640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T13:18:39.841 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:39 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2[112873]: 2026-03-10T13:18:39.515+0000 7fcdc16d8640 -1 osd.2 46 *** Got signal Terminated *** 2026-03-10T13:18:39.841 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:39 vm07 ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2[112873]: 2026-03-10T13:18:39.515+0000 7fcdc16d8640 -1 osd.2 46 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T13:18:44.821 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:44 vm07 podman[119509]: 2026-03-10 13:18:44.546380564 +0000 UTC m=+5.050535674 container died 51764ee4f9ce8f7df8ff67508699181cfc69f777237f7be947d3c1ff2dc8b276 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T13:18:44.821 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:44 vm07 podman[119509]: 2026-03-10 13:18:44.670734899 +0000 UTC m=+5.174889999 container remove 51764ee4f9ce8f7df8ff67508699181cfc69f777237f7be947d3c1ff2dc8b276 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T13:18:44.821 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:44 vm07 bash[119509]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2 2026-03-10T13:18:45.091 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:44 vm07 podman[119584]: 2026-03-10 13:18:44.821752572 +0000 UTC m=+0.018222710 container create 
7b6caf104efe85677d2d6961bc3225aa6a70f5226374b9397f038843862cbc91 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-deactivate, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T13:18:45.091 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:44 vm07 podman[119584]: 2026-03-10 13:18:44.868314546 +0000 UTC m=+0.064784694 container init 7b6caf104efe85677d2d6961bc3225aa6a70f5226374b9397f038843862cbc91 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-deactivate, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True) 2026-03-10T13:18:45.091 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:44 vm07 podman[119584]: 2026-03-10 13:18:44.873566236 +0000 UTC m=+0.070036374 container start 7b6caf104efe85677d2d6961bc3225aa6a70f5226374b9397f038843862cbc91 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.41.3) 2026-03-10T13:18:45.091 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:44 vm07 podman[119584]: 2026-03-10 13:18:44.874402733 +0000 UTC m=+0.070872871 container attach 7b6caf104efe85677d2d6961bc3225aa6a70f5226374b9397f038843862cbc91 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-deactivate, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, 
org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T13:18:45.091 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:44 vm07 podman[119584]: 2026-03-10 13:18:44.814929832 +0000 UTC m=+0.011399970 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T13:18:45.091 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:45 vm07 conmon[119595]: conmon 7b6caf104efe85677d2d : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b6caf104efe85677d2d6961bc3225aa6a70f5226374b9397f038843862cbc91.scope/memory.events 2026-03-10T13:18:45.091 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:45 vm07 podman[119584]: 2026-03-10 13:18:45.008316691 +0000 UTC m=+0.204786829 container died 7b6caf104efe85677d2d6961bc3225aa6a70f5226374b9397f038843862cbc91 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T13:18:45.160 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.2.service' 2026-03-10T13:18:45.590 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:45 vm07 podman[119584]: 2026-03-10 13:18:45.14146655 +0000 UTC m=+0.337936688 container remove 7b6caf104efe85677d2d6961bc3225aa6a70f5226374b9397f038843862cbc91 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f-osd-2-deactivate, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T13:18:45.591 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:45 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.2.service: Deactivated successfully. 
2026-03-10T13:18:45.591 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:45 vm07 systemd[1]: Stopped Ceph osd.2 for bd98ed20-1c82-11f1-9239-ff903ae4ee6f.
2026-03-10T13:18:45.591 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 10 13:18:45 vm07 systemd[1]: ceph-bd98ed20-1c82-11f1-9239-ff903ae4ee6f@osd.2.service: Consumed 1.197s CPU time.
2026-03-10T13:18:45.628 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T13:18:45.628 INFO:tasks.cephadm.osd.2:Stopped osd.2
2026-03-10T13:18:45.628 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f --force --keep-logs
2026-03-10T13:18:45.811 INFO:teuthology.orchestra.run.vm07.stdout:Deleting cluster with fsid: bd98ed20-1c82-11f1-9239-ff903ae4ee6f
2026-03-10T13:19:01.180 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T13:19:01.212 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-10T13:19:01.212 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1043/remote/vm07/crash
2026-03-10T13:19:01.212 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/crash -- .
2026-03-10T13:19:01.290 INFO:teuthology.orchestra.run.vm07.stderr:tar: /var/lib/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/crash: Cannot open: No such file or directory
2026-03-10T13:19:01.291 INFO:teuthology.orchestra.run.vm07.stderr:tar: Error is not recoverable: exiting now
2026-03-10T13:19:01.292 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-10T13:19:01.292 DEBUG:teuthology.orchestra.run.vm07:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v CEPHADM_FAILED_DAEMON | head -n 1
2026-03-10T13:19:01.379 INFO:teuthology.orchestra.run.vm07.stdout:2026-03-10T13:18:20.027194+0000 mon.a (mon.0) 486 : cluster [WRN] Health check failed: 1 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN)
2026-03-10T13:19:01.379 WARNING:tasks.cephadm:Found errors (ERR|WRN|SEC) in cluster log
2026-03-10T13:19:01.379 INFO:tasks.cephadm:Compressing logs...
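[annotation] The "badness" check above is a chain of egrep filters assembled from the job's overrides: select cluster-log lines tagged [ERR], [WRN] or [SEC], keep only lines matching the log-only-match pattern (CEPHADM_), drop every log-ignorelist entry, and report the first survivor. A minimal Python sketch of the same filtering logic, assuming nothing about teuthology's actual implementation:

import gzip
import re

SELECT = re.compile(r"\[ERR\]|\[WRN\]|\[SEC\]")
ONLY_MATCH = re.compile(r"CEPHADM_")            # log-only-match override
IGNORE = [re.compile(p) for p in (              # log-ignorelist override
    r"\(MDS_ALL_DOWN\)",
    r"\(MDS_UP_LESS_THAN_MAX\)",
    r"CEPHADM_FAILED_DAEMON",
)]

def first_bad_line(path):
    """Return the first offending cluster-log line, or None if clean."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as f:
        for line in f:
            if (SELECT.search(line) and ONLY_MATCH.search(line)
                    and not any(p.search(line) for p in IGNORE)):
                return line.rstrip()
    return None

Applied to the ceph.log above, the survivor is the mon.a line flagging CEPHADM_AGENT_DOWN, which is exactly what flips the task to "Found errors (ERR|WRN|SEC) in cluster log".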
2026-03-10T13:19:01.379 DEBUG:teuthology.orchestra.run.vm07:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T13:19:01.414 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T13:19:01.414 INFO:teuthology.orchestra.run.vm07.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T13:19:01.416 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-mon.a.log
2026-03-10T13:19:01.417 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph.log
2026-03-10T13:19:01.419 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-mon.a.log: 86.3% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T13:19:01.419 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-mgr.a.log
2026-03-10T13:19:01.420 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph.log: 84.1% -- replaced with /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph.log.gz
2026-03-10T13:19:01.420 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph.audit.log
2026-03-10T13:19:01.425 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-mgr.a.log: gzip -5 --verbose -- /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph.cephadm.log
2026-03-10T13:19:01.429 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph.audit.log: 89.3% -- replaced with /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph.audit.log.gz
2026-03-10T13:19:01.432 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-volume.log
2026-03-10T13:19:01.433 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph.cephadm.log: 78.3% -- replaced with /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph.cephadm.log.gz
2026-03-10T13:19:01.438 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-osd.0.log
2026-03-10T13:19:01.446 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-osd.1.log
2026-03-10T13:19:01.453 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-osd.0.log: gzip -5 --verbose -- /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-osd.2.log
2026-03-10T13:19:01.456 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-osd.1.log: 95.7% -- replaced with /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-volume.log.gz
2026-03-10T13:19:01.460 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/tcmu-runner.log
2026-03-10T13:19:01.473 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-osd.2.log: /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/tcmu-runner.log: 63.5% -- replaced with /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/tcmu-runner.log.gz
2026-03-10T13:19:01.502 INFO:teuthology.orchestra.run.vm07.stderr: 89.0% -- replaced with /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-mgr.a.log.gz
2026-03-10T13:19:01.532 INFO:teuthology.orchestra.run.vm07.stderr: 91.4% -- replaced with /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-mon.a.log.gz
2026-03-10T13:19:01.603 INFO:teuthology.orchestra.run.vm07.stderr: 95.3% -- replaced with /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-osd.1.log.gz
2026-03-10T13:19:01.624 INFO:teuthology.orchestra.run.vm07.stderr: 95.2% -- replaced with /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-osd.0.log.gz
2026-03-10T13:19:01.636 INFO:teuthology.orchestra.run.vm07.stderr: 95.1% -- replaced with /var/log/ceph/bd98ed20-1c82-11f1-9239-ff903ae4ee6f/ceph-osd.2.log.gz
2026-03-10T13:19:01.637 INFO:teuthology.orchestra.run.vm07.stderr:
2026-03-10T13:19:01.637 INFO:teuthology.orchestra.run.vm07.stderr:real 0m0.238s
2026-03-10T13:19:01.637 INFO:teuthology.orchestra.run.vm07.stderr:user 0m0.382s
2026-03-10T13:19:01.637 INFO:teuthology.orchestra.run.vm07.stderr:sys 0m0.059s
2026-03-10T13:19:01.638 INFO:tasks.cephadm:Archiving logs...
2026-03-10T13:19:01.638 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1043/remote/vm07/log
2026-03-10T13:19:01.638 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T13:19:01.743 INFO:tasks.cephadm:Removing cluster...
2026-03-10T13:19:01.743 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid bd98ed20-1c82-11f1-9239-ff903ae4ee6f --force
2026-03-10T13:19:01.919 INFO:teuthology.orchestra.run.vm07.stdout:Deleting cluster with fsid: bd98ed20-1c82-11f1-9239-ff903ae4ee6f
2026-03-10T13:19:02.184 INFO:tasks.cephadm:Removing cephadm ...
2026-03-10T13:19:02.184 DEBUG:teuthology.orchestra.run.vm07:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T13:19:02.206 INFO:tasks.cephadm:Teardown complete
2026-03-10T13:19:02.206 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-10T13:19:02.209 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-03-10T13:19:02.209 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T13:19:02.287 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
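[annotation] The removal announced in the last line is driven by the per-package shell loop echoed in the DEBUG lines that follow: one `sudo yum -y remove $d || true` per package, so each iteration is its own dnf transaction, and a failure (for instance a package already erased as a dependency of an earlier iteration) does not abort the loop. A rough Python equivalent, illustrative only, with the package list truncated:

import subprocess

PACKAGES = [
    "ceph-radosgw", "ceph-test", "ceph", "ceph-base", "cephadm",
    # ... remainder of the list echoed in the DEBUG lines below ...
]

for pkg in PACKAGES:
    # check=False mirrors `|| true`: an already-removed or missing
    # package must not abort the teardown.
    subprocess.run(["sudo", "yum", "-y", "remove", pkg], check=False)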
2026-03-10T13:19:02.287 DEBUG:teuthology.orchestra.run.vm07:>
2026-03-10T13:19:02.287 DEBUG:teuthology.orchestra.run.vm07:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-10T13:19:02.287 DEBUG:teuthology.orchestra.run.vm07:> sudo yum -y remove $d || true
2026-03-10T13:19:02.287 DEBUG:teuthology.orchestra.run.vm07:> done
2026-03-10T13:19:02.633 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:02.635 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:02.635 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size
2026-03-10T13:19:02.635 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:02.635 INFO:teuthology.orchestra.run.vm07.stdout:Removing:
2026-03-10T13:19:02.635 INFO:teuthology.orchestra.run.vm07.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M
2026-03-10T13:19:02.635 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies:
2026-03-10T13:19:02.635 INFO:teuthology.orchestra.run.vm07.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k
2026-03-10T13:19:02.635 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:02.635 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-03-10T13:19:02.635 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:02.635 INFO:teuthology.orchestra.run.vm07.stdout:Remove 2 Packages
2026-03-10T13:19:02.635 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:02.635 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 39 M
2026-03-10T13:19:02.635 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-03-10T13:19:02.638 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-03-10T13:19:02.638 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-03-10T13:19:02.662 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-03-10T13:19:02.662 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-03-10T13:19:02.701 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-03-10T13:19:02.725 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T13:19:02.725 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:19:02.725 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T13:19:02.725 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-10T13:19:02.725 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-10T13:19:02.725 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:02.726 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T13:19:02.734 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T13:19:02.749 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T13:19:02.823 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T13:19:02.824 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T13:19:02.874 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T13:19:02.874 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:02.874 INFO:teuthology.orchestra.run.vm07.stdout:Removed:
2026-03-10T13:19:02.874 INFO:teuthology.orchestra.run.vm07.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-10T13:19:02.874 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:02.874 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:03.085 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout:Removing:
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies:
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout:Remove 4 Packages
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 212 M
2026-03-10T13:19:03.086 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-03-10T13:19:03.089 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-03-10T13:19:03.089 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-03-10T13:19:03.121 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-03-10T13:19:03.121 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-03-10T13:19:03.176 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-03-10T13:19:03.182 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T13:19:03.185 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-10T13:19:03.188 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-10T13:19:03.205 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T13:19:03.268 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T13:19:03.268 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T13:19:03.268 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T13:19:03.268 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T13:19:03.316 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T13:19:03.316 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:03.316 INFO:teuthology.orchestra.run.vm07.stdout:Removed:
2026-03-10T13:19:03.316 INFO:teuthology.orchestra.run.vm07.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T13:19:03.316 INFO:teuthology.orchestra.run.vm07.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T13:19:03.316 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:03.316 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:03.550 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout:Removing:
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies:
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout:Remove 8 Packages
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 28 M
2026-03-10T13:19:03.551 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-03-10T13:19:03.554 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-03-10T13:19:03.554 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-03-10T13:19:03.591 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-03-10T13:19:03.591 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-03-10T13:19:03.634 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-03-10T13:19:03.640 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T13:19:03.644 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-10T13:19:03.646 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-10T13:19:03.649 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-10T13:19:03.652 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-10T13:19:03.654 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-10T13:19:03.674 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T13:19:03.674 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:19:03.674 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T13:19:03.674 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-10T13:19:03.674 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-10T13:19:03.674 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:03.675 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T13:19:03.683 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T13:19:03.707 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T13:19:03.707 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:19:03.707 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T13:19:03.707 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-10T13:19:03.707 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-10T13:19:03.707 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:03.708 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T13:19:03.799 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T13:19:03.799 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T13:19:03.799 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-10T13:19:03.799 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-10T13:19:03.799 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-10T13:19:03.799 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-10T13:19:03.799 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-10T13:19:03.799 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-10T13:19:03.944 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-10T13:19:03.944 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:03.944 INFO:teuthology.orchestra.run.vm07.stdout:Removed:
2026-03-10T13:19:03.944 INFO:teuthology.orchestra.run.vm07.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:03.944 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:03.944 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:03.944 INFO:teuthology.orchestra.run.vm07.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T13:19:03.944 INFO:teuthology.orchestra.run.vm07.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T13:19:03.944 INFO:teuthology.orchestra.run.vm07.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T13:19:03.944 INFO:teuthology.orchestra.run.vm07.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T13:19:03.944 INFO:teuthology.orchestra.run.vm07.stdout: zip-3.0-35.el9.x86_64
2026-03-10T13:19:03.944 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:03.944 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:04.166 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:04.172 INFO:teuthology.orchestra.run.vm07.stdout:===========================================================================================
2026-03-10T13:19:04.172 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout:===========================================================================================
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout:Removing:
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout:Removing dependent packages:
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies:
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-10T13:19:04.173 INFO:teuthology.orchestra.run.vm07.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-10T13:19:04.174 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout:===========================================================================================
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout:Remove 102 Packages
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 613 M
2026-03-10T13:19:04.175 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-03-10T13:19:04.206 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-03-10T13:19:04.207 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-03-10T13:19:04.368 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
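[annotation] The transaction run below repeats the scriptlet warnings already seen for ceph-radosgw, ceph-mds and ceph-mon: the packages' scriptlets pass a glob such as ceph-mgr@*.service to systemctl, and systemd escapes any character that is not valid in a unit name as a \xXX hex code ('*' is 0x2a). A simplified sketch of that escape rule (illustrative; systemd's full rules, e.g. for paths, differ):

# Characters systemd accepts verbatim in unit names (simplified).
ALLOWED = set(
    "abcdefghijklmnopqrstuvwxyz"
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "0123456789:_.@-"
)

def escape_unit_name(name: str) -> str:
    # Anything outside the allowed set becomes a \xXX hex escape.
    return "".join(c if c in ALLOWED else "\\x%02x" % ord(c) for c in name)

assert escape_unit_name("ceph-mgr@*.service") == r"ceph-mgr@\x2a.service"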
2026-03-10T13:19:04.368 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-03-10T13:19:04.547 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-03-10T13:19:04.547 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102
2026-03-10T13:19:04.556 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102
2026-03-10T13:19:04.580 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T13:19:04.580 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:19:04.580 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T13:19:04.580 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-10T13:19:04.580 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-10T13:19:04.580 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:04.581 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T13:19:04.596 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T13:19:04.620 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102
2026-03-10T13:19:04.620 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102
2026-03-10T13:19:04.687 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102
2026-03-10T13:19:04.697 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102
2026-03-10T13:19:04.702 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102
2026-03-10T13:19:04.702 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102
2026-03-10T13:19:04.715 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102
2026-03-10T13:19:04.723 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102
2026-03-10T13:19:04.727 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102
2026-03-10T13:19:04.736 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102
2026-03-10T13:19:04.740 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102
2026-03-10T13:19:04.762 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T13:19:04.762 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:19:04.762 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T13:19:04.762 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-10T13:19:04.762 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-10T13:19:04.762 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:04.763 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T13:19:04.771 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T13:19:04.786 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T13:19:04.786 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:19:04.786 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T13:19:04.786 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:04.796 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T13:19:04.804 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T13:19:04.807 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102
2026-03-10T13:19:04.812 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102
2026-03-10T13:19:04.817 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102
2026-03-10T13:19:04.828 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102
2026-03-10T13:19:04.841 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102
2026-03-10T13:19:04.847 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102
2026-03-10T13:19:04.859 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102
2026-03-10T13:19:04.867 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102
2026-03-10T13:19:04.900 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102
2026-03-10T13:19:04.909 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102
2026-03-10T13:19:04.913 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102
2026-03-10T13:19:04.923 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102
2026-03-10T13:19:04.929 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102
2026-03-10T13:19:04.930 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T13:19:04.937 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T13:19:05.049 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102
2026-03-10T13:19:05.065 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102
2026-03-10T13:19:05.078 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T13:19:05.078 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-10T13:19:05.078 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:05.079 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T13:19:05.117 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T13:19:05.136 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102
2026-03-10T13:19:05.142 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102
2026-03-10T13:19:05.146 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102
2026-03-10T13:19:05.148 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102
2026-03-10T13:19:05.173 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T13:19:05.173 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:19:05.173 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T13:19:05.173 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-10T13:19:05.173 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-10T13:19:05.173 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:05.174 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T13:19:05.189 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T13:19:05.195 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102
2026-03-10T13:19:05.198 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102
2026-03-10T13:19:05.201 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102
2026-03-10T13:19:05.204 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102
2026-03-10T13:19:05.208 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102
2026-03-10T13:19:05.212 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102
2026-03-10T13:19:05.217 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102
2026-03-10T13:19:05.273 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102
2026-03-10T13:19:05.285 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102
2026-03-10T13:19:05.288 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102
2026-03-10T13:19:05.289 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102
2026-03-10T13:19:05.291 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102
2026-03-10T13:19:05.296 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102
2026-03-10T13:19:05.299 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102
2026-03-10T13:19:05.325 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T13:19:05.325 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:19:05.325 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T13:19:05.325 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:05.326 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T13:19:05.337 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T13:19:05.340 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102
2026-03-10T13:19:05.343 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102
2026-03-10T13:19:05.346 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102
2026-03-10T13:19:05.349 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102
2026-03-10T13:19:05.351 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102
2026-03-10T13:19:05.354 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102
2026-03-10T13:19:05.357 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102
2026-03-10T13:19:05.360 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102
2026-03-10T13:19:05.369 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102
2026-03-10T13:19:05.373 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102
2026-03-10T13:19:05.375 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102
2026-03-10T13:19:05.378 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102
2026-03-10T13:19:05.381 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102
2026-03-10T13:19:05.387 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102
2026-03-10T13:19:05.392 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102
2026-03-10T13:19:05.398 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102
2026-03-10T13:19:05.403 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102
2026-03-10T13:19:05.411 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102
2026-03-10T13:19:05.415 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102
2026-03-10T13:19:05.418 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102
2026-03-10T13:19:05.421 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102
2026-03-10T13:19:05.428 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102
2026-03-10T13:19:05.432 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102
2026-03-10T13:19:05.436 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102
2026-03-10T13:19:05.446 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102
2026-03-10T13:19:05.455 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102
2026-03-10T13:19:05.460 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102
2026-03-10T13:19:05.463 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102
2026-03-10T13:19:05.464 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102
2026-03-10T13:19:05.471 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102
2026-03-10T13:19:05.475 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102
2026-03-10T13:19:05.496 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T13:19:05.496 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-10T13:19:05.496 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:05.503 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T13:19:05.527 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T13:19:05.527 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T13:19:05.537 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T13:19:05.543 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102
2026-03-10T13:19:05.546 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102
2026-03-10T13:19:05.548 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102
2026-03-10T13:19:05.548 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T13:19:11.624 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T13:19:11.624 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /sys
2026-03-10T13:19:11.624 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /proc
2026-03-10T13:19:11.624 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /mnt
2026-03-10T13:19:11.624 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /var/tmp
2026-03-10T13:19:11.624 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /home
2026-03-10T13:19:11.624 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /root
2026-03-10T13:19:11.624 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /tmp
2026-03-10T13:19:11.624 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:11.633 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102
2026-03-10T13:19:11.652 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T13:19:11.652 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T13:19:11.660 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T13:19:11.663 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102
2026-03-10T13:19:11.665 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102
2026-03-10T13:19:11.668 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102
2026-03-10T13:19:11.670 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102
2026-03-10T13:19:11.670 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102
2026-03-10T13:19:11.683 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102
2026-03-10T13:19:11.686 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102
2026-03-10T13:19:11.688 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102
2026-03-10T13:19:11.691 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102
2026-03-10T13:19:11.694 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102
2026-03-10T13:19:11.699 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102
2026-03-10T13:19:11.708 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102
2026-03-10T13:19:11.713 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102
2026-03-10T13:19:11.713 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-10T13:19:11.833 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-10T13:19:11.833 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102
2026-03-10T13:19:11.833 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T13:19:11.833 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102
2026-03-10T13:19:11.833 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102
2026-03-10T13:19:11.833 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102
2026-03-10T13:19:11.833 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102
2026-03-10T13:19:11.833 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102
2026-03-10T13:19:11.833 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102
2026-03-10T13:19:11.833 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102
2026-03-10T13:19:11.833 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102
2026-03-10T13:19:11.833 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102
2026-03-10T13:19:11.833 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T13:19:11.833 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102
2026-03-10T13:19:11.834 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102
2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102
2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying :
python3-natsort-7.1.1-5.el9.noarch 69/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying 
: python3-zc-lockfile-2.0-10.el9.noarch 98/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102 2026-03-10T13:19:11.835 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102 2026-03-10T13:19:11.919 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T13:19:11.919 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:19:11.919 INFO:teuthology.orchestra.run.vm07.stdout:Removed: 2026-03-10T13:19:11.919 INFO:teuthology.orchestra.run.vm07.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: liboath-2.6.12-1.el9.x86_64 
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-chardet-4.0.0-5.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-idna-2.10-7.el9.1.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-10T13:19:11.920 INFO:teuthology.orchestra.run.vm07.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-jsonpatch-1.21-16.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-jsonpointer-2.0-4.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-oauthlib-3.1.1-5.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-ply-3.11-14.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-prettytable-0.7.2-27.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-pysocks-1.7.1-12.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-pytz-2021.1-5.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:11.921 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:12.150 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:12.150 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:12.150 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size
2026-03-10T13:19:12.150 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:12.150 INFO:teuthology.orchestra.run.vm07.stdout:Removing:
2026-03-10T13:19:12.150 INFO:teuthology.orchestra.run.vm07.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k
2026-03-10T13:19:12.150 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:12.150 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-03-10T13:19:12.150 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:12.150 INFO:teuthology.orchestra.run.vm07.stdout:Remove 1 Package
2026-03-10T13:19:12.150 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:12.150 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 775 k
2026-03-10T13:19:12.150 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-03-10T13:19:12.152 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-03-10T13:19:12.152 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-03-10T13:19:12.153 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-03-10T13:19:12.153 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-03-10T13:19:12.171 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-03-10T13:19:12.171 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T13:19:12.290 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T13:19:12.346 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T13:19:12.346 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:12.346 INFO:teuthology.orchestra.run.vm07.stdout:Removed:
2026-03-10T13:19:12.346 INFO:teuthology.orchestra.run.vm07.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:19:12.346 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:12.346 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:12.558 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-immutable-object-cache
2026-03-10T13:19:12.558 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:12.561 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:12.562 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:12.562 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:12.744 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-mgr
2026-03-10T13:19:12.744 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:12.748 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:12.748 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:12.748 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:12.931 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-mgr-dashboard
2026-03-10T13:19:12.931 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:12.935 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:12.935 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:12.935 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:13.122 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-10T13:19:13.122 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:13.125 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:13.126 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:13.126 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:13.299 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-mgr-rook
2026-03-10T13:19:13.299 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:13.303 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:13.304 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:13.304 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:13.478 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-mgr-cephadm
2026-03-10T13:19:13.478 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:13.482 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:13.483 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:13.483 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:13.682 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:13.682 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:13.683 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size
2026-03-10T13:19:13.683 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:13.683 INFO:teuthology.orchestra.run.vm07.stdout:Removing:
2026-03-10T13:19:13.683 INFO:teuthology.orchestra.run.vm07.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M
2026-03-10T13:19:13.683 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:13.683 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-03-10T13:19:13.683 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:13.683 INFO:teuthology.orchestra.run.vm07.stdout:Remove 1 Package
2026-03-10T13:19:13.683 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:13.683 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 3.6 M
2026-03-10T13:19:13.683 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-03-10T13:19:13.685 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-03-10T13:19:13.685 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-03-10T13:19:13.695 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-03-10T13:19:13.695 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-03-10T13:19:13.721 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-03-10T13:19:13.736 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T13:19:13.801 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T13:19:13.844 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T13:19:13.844 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:13.844 INFO:teuthology.orchestra.run.vm07.stdout:Removed:
2026-03-10T13:19:13.844 INFO:teuthology.orchestra.run.vm07.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:13.844 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:13.844 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:14.045 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-volume
2026-03-10T13:19:14.045 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:14.049 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:14.050 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:14.050 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:14.257 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:14.257 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:14.258 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repo Size
2026-03-10T13:19:14.258 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:14.258 INFO:teuthology.orchestra.run.vm07.stdout:Removing:
2026-03-10T13:19:14.258 INFO:teuthology.orchestra.run.vm07.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k
2026-03-10T13:19:14.258 INFO:teuthology.orchestra.run.vm07.stdout:Removing dependent packages:
2026-03-10T13:19:14.258 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k
2026-03-10T13:19:14.258 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:14.258 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-03-10T13:19:14.258 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:14.258 INFO:teuthology.orchestra.run.vm07.stdout:Remove 2 Packages
2026-03-10T13:19:14.258 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:14.258 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 610 k
2026-03-10T13:19:14.258 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-03-10T13:19:14.260 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-03-10T13:19:14.260 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-03-10T13:19:14.272 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-03-10T13:19:14.272 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-03-10T13:19:14.299 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-03-10T13:19:14.302 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T13:19:14.315 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T13:19:14.387 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T13:19:14.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T13:19:14.438 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T13:19:14.438 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:14.438 INFO:teuthology.orchestra.run.vm07.stdout:Removed:
2026-03-10T13:19:14.438 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:14.438 INFO:teuthology.orchestra.run.vm07.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:14.438 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:14.438 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:14.639 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repo Size
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout:Removing:
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout:Removing dependent packages:
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies:
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout:Remove 3 Packages
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 3.7 M
2026-03-10T13:19:14.640 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-03-10T13:19:14.642 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-03-10T13:19:14.642 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-03-10T13:19:14.660 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-03-10T13:19:14.661 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-03-10T13:19:14.696 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-03-10T13:19:14.698 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T13:19:14.699 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T13:19:14.700 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T13:19:14.770 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T13:19:14.770 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T13:19:14.770 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T13:19:14.814 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T13:19:14.814 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:14.814 INFO:teuthology.orchestra.run.vm07.stdout:Removed:
2026-03-10T13:19:14.814 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:14.814 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:14.814 INFO:teuthology.orchestra.run.vm07.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:14.814 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:14.814 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:15.034 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: libcephfs-devel
2026-03-10T13:19:15.034 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:15.038 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:15.039 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:15.039 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:15.241 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout:Removing:
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout:Removing dependent packages:
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies:
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout:Remove 20 Packages
2026-03-10T13:19:15.243 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:15.244 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 79 M
2026-03-10T13:19:15.244 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-03-10T13:19:15.247 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-03-10T13:19:15.247 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-03-10T13:19:15.272 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-03-10T13:19:15.272 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-03-10T13:19:15.318 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-03-10T13:19:15.321 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-10T13:19:15.323 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-10T13:19:15.325 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-10T13:19:15.325 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T13:19:15.342 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T13:19:15.345 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-10T13:19:15.348 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-10T13:19:15.350 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T13:19:15.353 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-10T13:19:15.355 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-10T13:19:15.355 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T13:19:15.367 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T13:19:15.367 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T13:19:15.367 INFO:teuthology.orchestra.run.vm07.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T13:19:15.367 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:15.377 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T13:19:15.379 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-10T13:19:15.383 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-10T13:19:15.386 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-10T13:19:15.388 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-10T13:19:15.391 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-10T13:19:15.393 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-10T13:19:15.395 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-10T13:19:15.397 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-10T13:19:15.411 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-10T13:19:15.489 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout:Removed:
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:19:15.543 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:15.789 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: librbd1
2026-03-10T13:19:15.789 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:15.791 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:15.792 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:15.792 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:16.005 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: python3-rados
2026-03-10T13:19:16.005 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:16.008 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:16.009 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:16.009 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:16.200 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: python3-rgw
2026-03-10T13:19:16.200 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:16.203 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:16.203 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:16.204 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:16.399 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: python3-cephfs
2026-03-10T13:19:16.399 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:16.401 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:16.402 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:16.402 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:16.607 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: python3-rbd
2026-03-10T13:19:16.607 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:16.610 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:16.611 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:16.611 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:16.803 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: rbd-fuse
2026-03-10T13:19:16.803 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:16.806 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:16.806 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:16.806 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:17.075 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: rbd-mirror
2026-03-10T13:19:17.075 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:17.077 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:17.077 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:17.078 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:17.331 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: rbd-nbd
2026-03-10T13:19:17.332 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-03-10T13:19:17.334 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-03-10T13:19:17.334 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-03-10T13:19:17.334 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-10T13:19:17.368 DEBUG:teuthology.orchestra.run.vm07:> sudo yum clean all
2026-03-10T13:19:17.497 INFO:teuthology.orchestra.run.vm07.stdout:56 files removed
2026-03-10T13:19:17.516 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T13:19:17.544 DEBUG:teuthology.orchestra.run.vm07:> sudo yum clean expire-cache
2026-03-10T13:19:17.712 INFO:teuthology.orchestra.run.vm07.stdout:Cache was expired
2026-03-10T13:19:17.712 INFO:teuthology.orchestra.run.vm07.stdout:0 files removed
2026-03-10T13:19:17.739 DEBUG:teuthology.parallel:result is None
2026-03-10T13:19:17.739 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm07.local
2026-03-10T13:19:17.740 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T13:19:17.770 DEBUG:teuthology.orchestra.run.vm07:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-10T13:19:17.843 DEBUG:teuthology.parallel:result is None
2026-03-10T13:19:17.843 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-10T13:19:17.845 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-10T13:19:17.845 DEBUG:teuthology.orchestra.run.vm07:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T13:19:17.902 INFO:teuthology.orchestra.run.vm07.stderr:bash: line 1: ntpq: command not found
2026-03-10T13:19:17.956 INFO:teuthology.orchestra.run.vm07.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T13:19:17.956 INFO:teuthology.orchestra.run.vm07.stdout:===============================================================================
2026-03-10T13:19:17.956 INFO:teuthology.orchestra.run.vm07.stdout:^* time.cloudflare.com 3 6 377 41 -510us[ -492us] +/- 15ms
2026-03-10T13:19:17.956 INFO:teuthology.orchestra.run.vm07.stdout:^+ gromit.nocabal.de 2 6 377 40 +539us[ +539us] +/- 45ms
2026-03-10T13:19:17.956 INFO:teuthology.orchestra.run.vm07.stdout:^+ static.241.200.132.142.c> 2 6 377 41 +429us[ +429us] +/- 17ms
2026-03-10T13:19:17.956 INFO:teuthology.orchestra.run.vm07.stdout:^- server1b.meinberg.de 2 6 377 22 +260us[ +260us] +/- 34ms
2026-03-10T13:19:17.957 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-10T13:19:17.971 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-10T13:19:17.972 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-10T13:19:17.976 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-10T13:19:17.978 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-10T13:19:17.980 INFO:teuthology.task.internal:Duration was 608.473115 seconds
2026-03-10T13:19:17.981 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-10T13:19:17.983 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-10T13:19:17.983 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T13:19:18.046 INFO:teuthology.orchestra.run.vm07.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T13:19:18.373 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-10T13:19:18.373 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm07.local
2026-03-10T13:19:18.373 DEBUG:teuthology.orchestra.run.vm07:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T13:19:18.399 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-10T13:19:18.399 DEBUG:teuthology.orchestra.run.vm07:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T13:19:19.102 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T13:19:19.102 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T13:19:19.127 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T13:19:19.128 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T13:19:19.128 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T13:19:19.128 INFO:teuthology.orchestra.run.vm07.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T13:19:19.128 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T13:19:19.310 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 97.2% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T13:19:19.312 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T13:19:19.315 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T13:19:19.315 DEBUG:teuthology.orchestra.run.vm07:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T13:19:19.384 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T13:19:19.388 DEBUG:teuthology.orchestra.run.vm07:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T13:19:19.458 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = core
2026-03-10T13:19:19.473 DEBUG:teuthology.orchestra.run.vm07:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T13:19:19.532 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:19:19.532 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T13:19:19.535 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T13:19:19.535 DEBUG:teuthology.misc:Transferring archived files from vm07:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1043/remote/vm07
2026-03-10T13:19:19.535 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T13:19:19.605 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T13:19:19.606 DEBUG:teuthology.orchestra.run.vm07:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T13:19:19.662 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T13:19:19.665 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T13:19:19.665 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T13:19:19.667 INFO:teuthology.task.internal:Tidying up after the test...
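The archive transfer streams a tarball over the connection: tar on the remote writes to stdout (`-f -`) and the local side unpacks the stream directly, so nothing is staged on disk on either host. The interleaved gzip stderr just above is genuine, by the way: `--max-procs=0` runs one gzip per log file in parallel, so their progress lines overlap. A minimal sketch of the streaming pull, assuming plain ssh rather than teuthology's orchestra layer:

    import subprocess
    import tarfile

    def pull_archive(remote: str, remote_dir: str, local_dir: str) -> None:
        # The remote tar writes the archive to its stdout ("-f -"); we
        # unpack that stream as it arrives, so no tarball is ever staged.
        proc = subprocess.Popen(
            ["ssh", remote, f"sudo tar c -f - -C {remote_dir} -- ."],
            stdout=subprocess.PIPE)
        # "r|" reads the tar stream sequentially, which is all a pipe allows.
        with tarfile.open(fileobj=proc.stdout, mode="r|") as tar:
            tar.extractall(path=local_dir)
        proc.wait()

    pull_archive("vm07.local", "/home/ubuntu/cephtest/archive",
                 "/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1043/remote/vm07")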
2026-03-10T13:19:19.667 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T13:19:19.720 INFO:teuthology.orchestra.run.vm07.stdout: 8532144 0 drwxr-xr-x 3 ubuntu ubuntu 19 Mar 10 13:19 /home/ubuntu/cephtest
2026-03-10T13:19:19.720 INFO:teuthology.orchestra.run.vm07.stdout: 16831705 0 drwxr-xr-x 3 ubuntu ubuntu 22 Mar 10 13:14 /home/ubuntu/cephtest/mnt.0
2026-03-10T13:19:19.720 INFO:teuthology.orchestra.run.vm07.stdout: 30073411 0 drwxr-xr-x 3 ubuntu ubuntu 17 Mar 10 13:14 /home/ubuntu/cephtest/mnt.0/client.0
2026-03-10T13:19:19.720 INFO:teuthology.orchestra.run.vm07.stdout: 1625601 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 13:14 /home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-10T13:19:19.721 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:19:19.721 INFO:teuthology.orchestra.run.vm07.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-10T13:19:19.721 ERROR:teuthology.run_tasks:Manager failed: internal.base
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 53, in base
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm07 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-10T13:19:19.722 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T13:19:19.724 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: CommandFailedError: Command failed on vm07 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-10T13:19:19.726 INFO:teuthology.run:Summary data:
description: orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}}
duration: 608.4731149673462
failure_reason: 'Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on vm07 with status 125: ''mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh'''
flavor: default
owner: kyr
sentry_event: null
status: fail
success: false
2026-03-10T13:19:19.726 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T13:19:19.751 INFO:teuthology.run:FAIL
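Two failures are recorded here, and only the first matters. Per the summary's failure_reason, the workunit cephadm/test_iscsi_pids_limit.sh exited with status 125 (commonly the generic error exit of podman and other container runtimes, though the log does not show which command produced it), and because it died early, its mnt.0 tree under the test directory was never cleaned up. The teardown's rmdir then failed on the non-empty directory; that second error is by design, since internal.base uses rmdir rather than rm -rf precisely so that leftovers surface loudly instead of being deleted. A minimal sketch of that tidy-up step, again assuming plain ssh in place of teuthology's orchestra layer:

    import subprocess

    def tidy_testdir(remote: str, testdir: str = "/home/ubuntu/cephtest") -> None:
        # List whatever survived earlier cleanup so the log names the culprit.
        subprocess.run(["ssh", remote, f"find {testdir} -ls"], check=False)
        # rmdir (not rm -rf) is deliberate: a non-empty tree, such as the
        # orphaned mnt.0 left behind by the failed workunit here, becomes a
        # hard error instead of being silently discarded.
        subprocess.run(["ssh", remote, f"rmdir -- {testdir}"], check=True)

With the leftover mnt.0 present, check=True raises, which is the analogue of the CommandFailedError that run_tasks reports while unwinding internal.base above.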