2026-03-09T13:28:06.057 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-09T13:28:06.060 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T13:28:06.082 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/492
branch: squid
description: orch/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}}
email: null
first_in_suite: false
flavor: default
job_id: '492'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-09_11:23:05-orch-squid-none-default-vps
no_nested_subset: false
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 3
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: true
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - CEPHADM_FAILED_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - osd.0
  - osd.1
  - osd.2
  - mon.a
  - mgr.a
  - client.0
seed: 3443
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm04.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF/Bg9QZQRl5axFC2iHKMOaskBK1AD6nvtJFo42sM8El/pq2Kz9kKris7bDFAMYfdr97g4dh2P2Qv5fhBBCvfWY=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install nvmetcli nvme-cli -y
- install: null
- cephadm: null
- cephadm.shell:
    host.a:
    - ceph osd pool create foo
    - rbd pool init foo
    - ceph orch apply iscsi foo u p
- workunit:
    clients:
      client.0:
      - cephadm/test_iscsi_pids_limit.sh
      - cephadm/test_iscsi_etc_hosts.sh
      - cephadm/test_iscsi_setup.sh
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-09_11:23:05
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473

2026-03-09T13:28:06.082 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-09T13:28:06.082 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
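[Editor's note: the block above is the job's YAML configuration as dumped by teuthology. A minimal sketch of pulling out the fields the run acts on, assuming PyYAML is installed and the YAML has been saved to a local file (job-492.yaml is a hypothetical name):

    # Sketch only: parse a saved copy of the job config shown above.
    import yaml  # PyYAML, assumed installed

    with open("job-492.yaml") as f:  # hypothetical local copy of the config
        job = yaml.safe_load(f)

    print(job["job_id"])                           # '492'
    print(job["roles"][0])                         # roles for the single target host
    print(list(job["targets"]))                    # ['vm04.local']
    print(job["overrides"]["workunit"]["branch"])  # tt-squid

All roles here map onto one VM (vm04.local), which is why "roles" is a list containing a single list.]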
2026-03-09T13:28:06.082 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-09T13:28:06.083 INFO:teuthology.task.internal:Checking packages...
2026-03-09T13:28:06.083 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-09T13:28:06.083 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T13:28:06.083 INFO:teuthology.packaging:ref: None
2026-03-09T13:28:06.083 INFO:teuthology.packaging:tag: None
2026-03-09T13:28:06.083 INFO:teuthology.packaging:branch: squid
2026-03-09T13:28:06.083 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T13:28:06.083 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-09T13:28:06.877 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-09T13:28:06.878 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-09T13:28:06.879 INFO:teuthology.task.internal:no buildpackages task found
2026-03-09T13:28:06.879 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-09T13:28:06.879 INFO:teuthology.task.internal:Saving configuration
2026-03-09T13:28:06.883 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-09T13:28:06.884 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-09T13:28:06.895 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm04.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/492', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 13:27:29.588003', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:04', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF/Bg9QZQRl5axFC2iHKMOaskBK1AD6nvtJFo42sM8El/pq2Kz9kKris7bDFAMYfdr97g4dh2P2Qv5fhBBCvfWY='}
2026-03-09T13:28:06.895 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-09T13:28:06.896 INFO:teuthology.task.internal:roles: ubuntu@vm04.local - ['host.a', 'osd.0', 'osd.1', 'osd.2', 'mon.a', 'mgr.a', 'client.0']
2026-03-09T13:28:06.896 INFO:teuthology.run_tasks:Running task console_log...
2026-03-09T13:28:06.902 DEBUG:teuthology.task.console_log:vm04 does not support IPMI; excluding
2026-03-09T13:28:06.902 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f422d086170>, signals=[15])
2026-03-09T13:28:06.902 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-09T13:28:06.903 INFO:teuthology.task.internal:Opening connections...
2026-03-09T13:28:06.903 DEBUG:teuthology.task.internal:connecting to ubuntu@vm04.local
2026-03-09T13:28:06.904 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T13:28:06.963 INFO:teuthology.run_tasks:Running task internal.push_inventory...
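[Editor's note: internal.check_packages resolves the branch to ready builds via the Shaman search API, using exactly the query logged above. A minimal sketch reproducing that query, assuming the requests library is available; the response field names used below are illustrative, not a documented schema:

    # Sketch only: reproduce the Shaman build search from the log above.
    import requests

    resp = requests.get(
        "https://shaman.ceph.com/api/search",
        params={
            "status": "ready",
            "project": "ceph",
            "flavor": "default",
            "distros": "centos/9/x86_64",   # URL-encodes to centos%2F9%2Fx86_64
            "ref": "squid",
        },
        timeout=30,
    )
    resp.raise_for_status()
    for build in resp.json():
        # e.g. sha1 e911bdeb..., matching "Found packages for ceph version ..."
        print(build.get("sha1"), build.get("extra", {}).get("version"))

Note the warning above: when ref, tag, branch, and sha1 are all present, the branch wins for this lookup.]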
2026-03-09T13:28:06.964 DEBUG:teuthology.orchestra.run.vm04:> uname -m
2026-03-09T13:28:07.126 INFO:teuthology.orchestra.run.vm04.stdout:x86_64
2026-03-09T13:28:07.127 DEBUG:teuthology.orchestra.run.vm04:> cat /etc/os-release
2026-03-09T13:28:07.184 INFO:teuthology.orchestra.run.vm04.stdout:NAME="CentOS Stream"
2026-03-09T13:28:07.184 INFO:teuthology.orchestra.run.vm04.stdout:VERSION="9"
2026-03-09T13:28:07.184 INFO:teuthology.orchestra.run.vm04.stdout:ID="centos"
2026-03-09T13:28:07.184 INFO:teuthology.orchestra.run.vm04.stdout:ID_LIKE="rhel fedora"
2026-03-09T13:28:07.184 INFO:teuthology.orchestra.run.vm04.stdout:VERSION_ID="9"
2026-03-09T13:28:07.184 INFO:teuthology.orchestra.run.vm04.stdout:PLATFORM_ID="platform:el9"
2026-03-09T13:28:07.184 INFO:teuthology.orchestra.run.vm04.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-09T13:28:07.184 INFO:teuthology.orchestra.run.vm04.stdout:ANSI_COLOR="0;31"
2026-03-09T13:28:07.184 INFO:teuthology.orchestra.run.vm04.stdout:LOGO="fedora-logo-icon"
2026-03-09T13:28:07.184 INFO:teuthology.orchestra.run.vm04.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-09T13:28:07.184 INFO:teuthology.orchestra.run.vm04.stdout:HOME_URL="https://centos.org/"
2026-03-09T13:28:07.184 INFO:teuthology.orchestra.run.vm04.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-09T13:28:07.184 INFO:teuthology.orchestra.run.vm04.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-09T13:28:07.184 INFO:teuthology.orchestra.run.vm04.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-09T13:28:07.185 INFO:teuthology.lock.ops:Updating vm04.local on lock server
2026-03-09T13:28:07.190 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-09T13:28:07.191 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-09T13:28:07.192 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-09T13:28:07.192 DEBUG:teuthology.orchestra.run.vm04:> test '!' -e /home/ubuntu/cephtest
2026-03-09T13:28:07.240 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-09T13:28:07.241 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-09T13:28:07.241 DEBUG:teuthology.orchestra.run.vm04:> test -z $(ls -A /var/lib/ceph)
2026-03-09T13:28:07.298 INFO:teuthology.orchestra.run.vm04.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T13:28:07.298 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-09T13:28:07.314 DEBUG:teuthology.orchestra.run.vm04:> test -e /ceph-qa-ready
2026-03-09T13:28:07.355 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T13:28:07.546 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-09T13:28:07.547 INFO:teuthology.task.internal:Creating test directory...
2026-03-09T13:28:07.547 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T13:28:07.565 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-09T13:28:07.567 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-09T13:28:07.568 INFO:teuthology.task.internal:Creating archive directory...
2026-03-09T13:28:07.568 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T13:28:07.623 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-09T13:28:07.625 INFO:teuthology.task.internal:Enabling coredump saving...
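[Editor's note: the inventory push above derives os_type ('centos') and os_version from the /etc/os-release output it just captured. A minimal sketch of parsing that standard KEY="value" format, with simplified quoting handled by shlex; the function name is hypothetical:

    # Sketch only: turn /etc/os-release text like the block above into a dict.
    import shlex

    def parse_os_release(text: str) -> dict:
        info = {}
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            info[key] = shlex.split(value)[0] if value else ""
        return info

    sample = 'ID="centos"\nVERSION_ID="9"\nPRETTY_NAME="CentOS Stream 9"\n'
    info = parse_os_release(sample)
    print(info["ID"], info["VERSION_ID"])  # -> centos 9
]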
2026-03-09T13:28:07.625 DEBUG:teuthology.orchestra.run.vm04:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T13:28:07.676 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T13:28:07.676 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T13:28:07.741 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T13:28:07.750 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T13:28:07.752 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-09T13:28:07.753 INFO:teuthology.task.internal:Configuring sudo...
2026-03-09T13:28:07.753 DEBUG:teuthology.orchestra.run.vm04:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T13:28:07.821 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-09T13:28:07.823 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-09T13:28:07.823 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T13:28:07.878 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T13:28:07.942 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T13:28:08.000 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T13:28:08.000 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T13:28:08.057 DEBUG:teuthology.orchestra.run.vm04:> sudo service rsyslog restart
2026-03-09T13:28:08.124 INFO:teuthology.orchestra.run.vm04.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T13:28:08.417 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-09T13:28:08.419 INFO:teuthology.task.internal:Starting timer...
2026-03-09T13:28:08.419 INFO:teuthology.run_tasks:Running task pcp...
2026-03-09T13:28:08.421 INFO:teuthology.run_tasks:Running task selinux...
2026-03-09T13:28:08.424 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0', 'scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0']}
2026-03-09T13:28:08.424 INFO:teuthology.task.selinux:Excluding vm04: VMs are not yet supported
2026-03-09T13:28:08.424 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-09T13:28:08.424 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-09T13:28:08.424 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-09T13:28:08.424 INFO:teuthology.run_tasks:Running task ansible.cephlab...
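[Editor's note: the coredump task above sets kernel.core_pattern to %t.%p.core under the archive directory; per core(5), the kernel expands %t to the dump time in seconds since the epoch and %p to the dumping PID, so archived core names map back to a timestamp and process. A minimal sketch of decoding such names; the example filename is made up:

    # Sketch only: decode names produced by .../coredump/%t.%p.core.
    from datetime import datetime, timezone

    def parse_core_name(name: str):
        stem, _, suffix = name.rpartition(".")
        assert suffix == "core"
        epoch, _, pid = stem.partition(".")
        return datetime.fromtimestamp(int(epoch), tz=timezone.utc), int(pid)

    when, pid = parse_core_name("1772950000.12345.core")  # hypothetical file
    print(when.isoformat(), pid)
]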
2026-03-09T13:28:08.425 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-09T13:28:08.425 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-09T13:28:08.427 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-09T13:28:09.133 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-09T13:28:09.148 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-09T13:28:09.149 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryoetjufta --limit vm04.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-09T13:29:40.709 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm04.local')]
2026-03-09T13:29:40.710 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm04.local'
2026-03-09T13:29:40.710 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T13:29:40.773 DEBUG:teuthology.orchestra.run.vm04:> true
2026-03-09T13:29:40.854 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm04.local'
2026-03-09T13:29:40.854 INFO:teuthology.run_tasks:Running task clock...
2026-03-09T13:29:40.857 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-09T13:29:40.857 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T13:29:40.857 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T13:29:40.926 INFO:teuthology.orchestra.run.vm04.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-09T13:29:40.940 INFO:teuthology.orchestra.run.vm04.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-09T13:29:40.968 INFO:teuthology.orchestra.run.vm04.stderr:sudo: ntpd: command not found
2026-03-09T13:29:40.979 INFO:teuthology.orchestra.run.vm04.stdout:506 Cannot talk to daemon
2026-03-09T13:29:40.998 INFO:teuthology.orchestra.run.vm04.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-09T13:29:41.015 INFO:teuthology.orchestra.run.vm04.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
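[Editor's note: the clock task's one-liner above tries each time daemon in turn (ntp, ntpd, chronyd), forces a step, and restarts whichever exists; on this CentOS 9 node only chronyd is present, so the ntp/ntpd failures here and the chronyc output just below are expected. A minimal sketch of the same first-success fallback pattern, using the service and command names from the log:

    # Sketch only: "try each candidate, first success wins", as the
    # clock task's shell one-liner does. Running this invokes sudo.
    import subprocess

    def first_success(commands):
        for cmd in commands:
            if subprocess.run(cmd, check=False).returncode == 0:
                return cmd
        return None

    for svc in ("ntp", "ntpd", "chronyd"):
        subprocess.run(["sudo", "systemctl", "stop", f"{svc}.service"], check=False)

    first_success([["sudo", "ntpd", "-gq"], ["sudo", "chronyc", "makestep"]])
    first_success([["sudo", "systemctl", "start", f"{svc}.service"]
                   for svc in ("ntp", "ntpd", "chronyd")])
]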
2026-03-09T13:29:41.070 INFO:teuthology.orchestra.run.vm04.stderr:bash: line 1: ntpq: command not found
2026-03-09T13:29:41.072 INFO:teuthology.orchestra.run.vm04.stdout:MS Name/IP address         Stratum Poll Reach LastRx Last sample
2026-03-09T13:29:41.072 INFO:teuthology.orchestra.run.vm04.stdout:===============================================================================
2026-03-09T13:29:41.073 INFO:teuthology.run_tasks:Running task pexec...
2026-03-09T13:29:41.075 INFO:teuthology.task.pexec:Executing custom commands...
2026-03-09T13:29:41.075 DEBUG:teuthology.orchestra.run.vm04:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-09T13:29:41.116 DEBUG:teuthology.task.pexec:ubuntu@vm04.local< sudo dnf remove nvme-cli -y
2026-03-09T13:29:41.116 DEBUG:teuthology.task.pexec:ubuntu@vm04.local< sudo dnf install nvmetcli nvme-cli -y
2026-03-09T13:29:41.117 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm04.local
2026-03-09T13:29:41.117 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-09T13:29:41.117 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y
2026-03-09T13:29:41.322 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: nvme-cli
2026-03-09T13:29:41.322 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T13:29:41.325 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T13:29:41.326 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T13:29:41.326 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T13:29:41.768 INFO:teuthology.orchestra.run.vm04.stdout:Last metadata expiration check: 0:00:59 ago on Mon 09 Mar 2026 01:28:42 PM UTC.
2026-03-09T13:29:41.894 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T13:29:41.894 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T13:29:41.894 INFO:teuthology.orchestra.run.vm04.stdout: Package              Architecture  Version         Repository      Size
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout:Installing:
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout: nvme-cli             x86_64        2.16-1.el9      baseos          1.2 M
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout: nvmetcli             noarch        0.8-3.el9       baseos          44 k
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout:Installing dependencies:
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout: python3-configshell  noarch        1:1.1.30-1.el9  baseos          72 k
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout: python3-kmod         x86_64        0.9-32.el9      baseos          84 k
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyparsing    noarch        2.4.7-9.el9     baseos          150 k
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout: python3-urwid        x86_64        2.1.2-4.el9     baseos          837 k
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout:Install 6 Packages
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout:Total download size: 2.3 M
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout:Installed size: 11 M
2026-03-09T13:29:41.895 INFO:teuthology.orchestra.run.vm04.stdout:Downloading Packages:
2026-03-09T13:29:42.146 INFO:teuthology.orchestra.run.vm04.stdout:(1/6): nvmetcli-0.8-3.el9.noarch.rpm 297 kB/s | 44 kB 00:00
2026-03-09T13:29:42.174 INFO:teuthology.orchestra.run.vm04.stdout:(2/6): python3-configshell-1.1.30-1.el9.noarch. 409 kB/s | 72 kB 00:00
2026-03-09T13:29:42.256 INFO:teuthology.orchestra.run.vm04.stdout:(3/6): python3-kmod-0.9-32.el9.x86_64.rpm 770 kB/s | 84 kB 00:00
2026-03-09T13:29:42.273 INFO:teuthology.orchestra.run.vm04.stdout:(4/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 1.5 MB/s | 150 kB 00:00
2026-03-09T13:29:42.288 INFO:teuthology.orchestra.run.vm04.stdout:(5/6): nvme-cli-2.16-1.el9.x86_64.rpm 4.0 MB/s | 1.2 MB 00:00
2026-03-09T13:29:42.322 INFO:teuthology.orchestra.run.vm04.stdout:(6/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 13 MB/s | 837 kB 00:00
2026-03-09T13:29:42.322 INFO:teuthology.orchestra.run.vm04.stdout:--------------------------------------------------------------------------------
2026-03-09T13:29:42.322 INFO:teuthology.orchestra.run.vm04.stdout:Total 5.4 MB/s | 2.3 MB 00:00
2026-03-09T13:29:42.396 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T13:29:42.408 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T13:29:42.408 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T13:29:42.472 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T13:29:42.472 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T13:29:42.645 INFO:teuthology.orchestra.run.vm04.stdout:  Preparing        :                                                        1/1
2026-03-09T13:29:42.658 INFO:teuthology.orchestra.run.vm04.stdout:  Installing       : python3-urwid-2.1.2-4.el9.x86_64                      1/6
2026-03-09T13:29:42.675 INFO:teuthology.orchestra.run.vm04.stdout:  Installing       : python3-pyparsing-2.4.7-9.el9.noarch                  2/6
2026-03-09T13:29:42.684 INFO:teuthology.orchestra.run.vm04.stdout:  Installing       : python3-configshell-1:1.1.30-1.el9.noarch             3/6
2026-03-09T13:29:42.691 INFO:teuthology.orchestra.run.vm04.stdout:  Installing       : python3-kmod-0.9-32.el9.x86_64                        4/6
2026-03-09T13:29:42.693 INFO:teuthology.orchestra.run.vm04.stdout:  Installing       : nvmetcli-0.8-3.el9.noarch                             5/6
2026-03-09T13:29:42.881 INFO:teuthology.orchestra.run.vm04.stdout:  Running scriptlet: nvmetcli-0.8-3.el9.noarch                             5/6
2026-03-09T13:29:42.888 INFO:teuthology.orchestra.run.vm04.stdout:  Installing       : nvme-cli-2.16-1.el9.x86_64                            6/6
2026-03-09T13:29:43.281 INFO:teuthology.orchestra.run.vm04.stdout:  Running scriptlet: nvme-cli-2.16-1.el9.x86_64                            6/6
2026-03-09T13:29:43.281 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-09T13:29:43.281 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:29:43.844 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : nvme-cli-2.16-1.el9.x86_64                            1/6
2026-03-09T13:29:43.844 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : nvmetcli-0.8-3.el9.noarch                             2/6
2026-03-09T13:29:43.844 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : python3-configshell-1:1.1.30-1.el9.noarch             3/6
2026-03-09T13:29:43.844 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : python3-kmod-0.9-32.el9.x86_64                        4/6
2026-03-09T13:29:43.844 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : python3-pyparsing-2.4.7-9.el9.noarch                  5/6
2026-03-09T13:29:43.937 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : python3-urwid-2.1.2-4.el9.x86_64                      6/6
2026-03-09T13:29:43.937 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:29:43.937 INFO:teuthology.orchestra.run.vm04.stdout:Installed:
2026-03-09T13:29:43.937 INFO:teuthology.orchestra.run.vm04.stdout:  nvme-cli-2.16-1.el9.x86_64                 nvmetcli-0.8-3.el9.noarch
2026-03-09T13:29:43.937 INFO:teuthology.orchestra.run.vm04.stdout:  python3-configshell-1:1.1.30-1.el9.noarch  python3-kmod-0.9-32.el9.x86_64
2026-03-09T13:29:43.937 INFO:teuthology.orchestra.run.vm04.stdout:  python3-pyparsing-2.4.7-9.el9.noarch       python3-urwid-2.1.2-4.el9.x86_64
2026-03-09T13:29:43.937 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:29:43.937 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T13:29:44.000 DEBUG:teuthology.parallel:result is None
2026-03-09T13:29:44.000 INFO:teuthology.run_tasks:Running task install...
2026-03-09T13:29:44.002 DEBUG:teuthology.task.install:project ceph
2026-03-09T13:29:44.002 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-09T13:29:44.002 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-09T13:29:44.002 INFO:teuthology.task.install:Using flavor: default
2026-03-09T13:29:44.005 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-09T13:29:44.005 INFO:teuthology.task.install:extra packages: []
2026-03-09T13:29:44.005 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-09T13:29:44.005 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T13:29:44.638 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-09T13:29:44.638 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-09T13:29:45.211 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-09T13:29:45.211 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T13:29:45.211 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-09T13:29:45.244 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-09T13:29:45.244 DEBUG:teuthology.orchestra.run.vm04:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-09T13:29:45.314 DEBUG:teuthology.orchestra.run.vm04:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-09T13:29:45.399 DEBUG:teuthology.orchestra.run.vm04:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-09T13:29:45.421 INFO:teuthology.orchestra.run.vm04.stdout:check_obsoletes = 1
2026-03-09T13:29:45.423 DEBUG:teuthology.orchestra.run.vm04:> sudo yum clean all
2026-03-09T13:29:45.635 INFO:teuthology.orchestra.run.vm04.stdout:41 files removed
2026-03-09T13:29:45.662 DEBUG:teuthology.orchestra.run.vm04:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-09T13:29:47.029 INFO:teuthology.orchestra.run.vm04.stdout:ceph packages for x86_64 71 kB/s | 84 kB 00:01
2026-03-09T13:29:48.017 INFO:teuthology.orchestra.run.vm04.stdout:ceph noarch packages 12 kB/s | 12 kB 00:00
2026-03-09T13:29:48.951 INFO:teuthology.orchestra.run.vm04.stdout:ceph source packages 2.1 kB/s | 1.9 kB 00:00
2026-03-09T13:29:49.645 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - BaseOS 13 MB/s | 8.9 MB 00:00
2026-03-09T13:29:51.529 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - AppStream 24 MB/s | 27 MB 00:01
2026-03-09T13:29:55.873 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - CRB 5.1 MB/s | 8.0 MB 00:01
2026-03-09T13:29:57.515 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - Extras packages 26 kB/s | 20 kB 00:00
2026-03-09T13:29:58.039 INFO:teuthology.orchestra.run.vm04.stdout:Extra Packages for Enterprise Linux 45 MB/s | 20 MB 00:00
2026-03-09T13:30:02.723 INFO:teuthology.orchestra.run.vm04.stdout:lab-extras 53 kB/s | 50 kB 00:00
2026-03-09T13:30:04.345 INFO:teuthology.orchestra.run.vm04.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-09T13:30:04.345 INFO:teuthology.orchestra.run.vm04.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-09T13:30:04.349 INFO:teuthology.orchestra.run.vm04.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed.
2026-03-09T13:30:04.350 INFO:teuthology.orchestra.run.vm04.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-09T13:30:04.377 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout:======================================================================================
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: Package                        Arch    Version                    Repository   Size
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout:======================================================================================
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout:Installing:
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: ceph                           x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         6.5 k
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base                      x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         5.5 M
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse                      x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         1.2 M
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache    x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         145 k
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr                       x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         1.1 M
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm               noarch  2:19.2.3-678.ge911bdeb.el9 ceph-noarch  150 k
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard             noarch  2:19.2.3-678.ge911bdeb.el9 ceph-noarch  3.8 M
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local  noarch  2:19.2.3-678.ge911bdeb.el9 ceph-noarch  7.4 M
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook                  noarch  2:19.2.3-678.ge911bdeb.el9 ceph-noarch  49 k
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: ceph-radosgw                   x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         11 M
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: ceph-test                      x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         50 M
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume                    noarch  2:19.2.3-678.ge911bdeb.el9 ceph-noarch  299 k
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: cephadm                        noarch  2:19.2.3-678.ge911bdeb.el9 ceph-noarch  769 k
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel                x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         34 k
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2                     x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         1.0 M
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: librados-devel                 x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         127 k
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs                 x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         165 k
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: python3-jmespath               noarch  1.0.1-1.el9                appstream    48 k
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados                  x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         323 k
2026-03-09T13:30:04.381 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd                    x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         303 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw                    x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         100 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-xmltodict              noarch  0.12.0-15.el9              epel         22 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse                       x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         85 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror                     x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         3.1 M
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd                        x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         171 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout:Upgrading:
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: librados2                      x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         3.4 M
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: librbd1                        x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         3.2 M
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout:Installing dependencies:
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp                     x86_64  20211102.0-4.el9           epel         551 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options          x86_64  1.75.0-13.el9              appstream    104 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common                    x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         22 M
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: ceph-grafana-dashboards        noarch  2:19.2.3-678.ge911bdeb.el9 ceph-noarch  31 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds                       x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         2.4 M
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core          noarch  2:19.2.3-678.ge911bdeb.el9 ceph-noarch  253 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon                       x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         4.7 M
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd                       x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         17 M
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: ceph-prometheus-alerts         noarch  2:19.2.3-678.ge911bdeb.el9 ceph-noarch  17 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux                   x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         25 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup                     x86_64  2.8.1-3.el9                baseos       351 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas                      x86_64  3.0.4-9.el9                appstream    30 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib               x86_64  3.0.4-9.el9                appstream    3.0 M
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp      x86_64  3.0.4-9.el9                appstream    15 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs                x86_64  2.9.1-3.el9                epel         308 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data                      noarch  1.46.7-10.el9              epel         19 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs                    x86_64  1.1.0-3.el9                baseos       40 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: libarrow                       x86_64  9.0.0-15.el9               epel         4.4 M
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc                   noarch  9.0.0-15.el9               epel         25 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite                  x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         163 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: libconfig                      x86_64  1.7.2-9.el9                baseos       72 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran                    x86_64  11.5.0-14.el9              baseos       794 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: libnbd                         x86_64  1.20.3-4.el9               appstream    164 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: liboath                        x86_64  2.6.12-1.el9               epel         49 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj                     x86_64  1.12.1-1.el9               appstream    160 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath                    x86_64  11.5.0-14.el9              baseos       184 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq                    x86_64  0.11.0-7.el9               appstream    45 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1               x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         503 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka                     x86_64  1.6.1-102.el9              appstream    662 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: librgw2                        x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         5.4 M
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt                 x86_64  1.10.1-1.el9               appstream    246 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: libunwind                      x86_64  1.6.2-1.el9                epel         67 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: libxslt                        x86_64  1.1.34-12.el9              appstream    233 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust                      x86_64  2.12.0-6.el9               appstream    292 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: lua                            x86_64  5.4.4-4.el9                appstream    188 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: lua-devel                      x86_64  5.4.4-4.el9                crb          22 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: luarocks                       noarch  3.9.2-5.el9                epel         151 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: mailcap                        noarch  2.1.49-5.el9               baseos       33 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: openblas                       x86_64  0.3.29-1.el9               appstream    42 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: openblas-openmp                x86_64  0.3.29-1.el9               appstream    5.3 M
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs                   x86_64  9.0.0-15.el9               epel         838 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: pciutils                       x86_64  3.7.0-7.el9                baseos       93 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: protobuf                       x86_64  3.14.0-17.el9              appstream    1.0 M
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler              x86_64  3.14.0-17.el9              crb          862 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh               noarch  2.13.2-5.el9               epel         548 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand            noarch  2.2.2-8.el9                epel         29 k
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-babel                  noarch  2.9.1-2.el9                appstream    6.0 M
2026-03-09T13:30:04.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile      noarch  1.2.0-1.el9                epel         60 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt                 x86_64  3.2.2-1.el9                epel         43 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools             noarch  4.2.4-1.el9                epel         32 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse          x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         45 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common            x86_64  2:19.2.3-678.ge911bdeb.el9 ceph         142 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi                noarch  2023.05.07-4.el9           epel         14 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi                   x86_64  1.14.5-5.el9               baseos       253 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot                noarch  10.0.1-4.el9               epel         173 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy               noarch  18.6.1-2.el9               epel         358 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography           x86_64  36.0.1-5.el9               baseos       1.2 M
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-devel                  x86_64  3.9.25-3.el9               appstream    244 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth            noarch  1:2.45.0-1.el9             epel         254 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio                 x86_64  1.46.7-10.el9              epel         2.0 M
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-tools           x86_64  1.46.7-10.el9              epel         144 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco                 noarch  8.2.1-3.el9                epel         11 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes         noarch  3.2.1-5.el9                epel         18 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-collections     noarch  3.0.0-8.el9                epel         23 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context         noarch  6.0.1-3.el9                epel         20 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools       noarch  3.5.0-2.el9                epel         19 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text            noarch  4.0.0-2.el9                epel         26 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-jinja2                 noarch  2.11.3-8.el9               appstream    249 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes             noarch  1:26.1.0-3.el9             epel         1.0 M
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt         x86_64  1.10.1-1.el9               appstream    177 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-logutils               noarch  0.3.5-21.el9               epel         46 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako                   noarch  1.1.4-6.el9                appstream    172 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe             x86_64  1.1.1-12.el9               appstream    35 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools         noarch  8.12.0-2.el9               epel         79 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort                noarch  7.1.1-5.el9                epel         58 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy                  x86_64  1:1.23.5-2.el9             appstream    6.1 M
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py             x86_64  1:1.23.5-2.el9             appstream    442 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-packaging              noarch  20.9-5.el9                 appstream    77 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan                  noarch  1.4.2-3.el9                epel         272 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-ply                    noarch  3.11-14.el9                baseos       106 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend                noarch  3.1.0-2.el9                epel         16 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-protobuf               noarch  3.14.0-17.el9              appstream    267 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL              noarch  21.0.0-1.el9               epel         90 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1                 noarch  0.4.8-7.el9                appstream    157 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules         noarch  0.4.8-7.el9                appstream    277 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser              noarch  2.20-6.el9                 baseos       135 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru             noarch  0.7-16.el9                 epel         31 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests               noarch  2.25.1-10.el9              baseos       126 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib      noarch  1.3.0-12.el9               appstream    54 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes                 noarch  2.5.1-5.el9                epel         188 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa                    noarch  4.9-2.el9                  epel         59 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy                  x86_64  1.9.3-2.el9                appstream    19 M
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora                noarch  5.0.0-2.el9                epel         36 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-toml                   noarch  0.10.2-6.el9               appstream    42 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions      noarch  4.15.0-1.el9               epel         86 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-urllib3                noarch  1.26.5-7.el9               baseos       218 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob                  noarch  1.8.8-2.el9                epel         230 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client       noarch  1.2.3-2.el9                epel         90 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug               noarch  2.0.3-3.el9.1              epel         427 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile            noarch  2.0-10.el9                 epel         20 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: qatlib                         x86_64  25.08.0-2.el9              appstream    240 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs                    x86_64  1.3.1-1.el9                appstream    66 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: re2                            x86_64  1:20211101-20.el9          epel         191 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: socat                          x86_64  1.7.4.1-8.el9              appstream    303 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: thrift                         x86_64  0.15.0-4.el9               epel         1.6 M
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: unzip                          x86_64  6.0-59.el9                 baseos       182 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet                     x86_64  1.6.1-20.el9               appstream    64 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: zip                            x86_64  3.0-35.el9                 baseos       266 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout:Installing weak dependencies:
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service                 x86_64  25.08.0-2.el9              appstream    37 k
2026-03-09T13:30:04.383 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:30:04.384 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T13:30:04.384 INFO:teuthology.orchestra.run.vm04.stdout:======================================================================================
2026-03-09T13:30:04.384 INFO:teuthology.orchestra.run.vm04.stdout:Install 134 Packages
2026-03-09T13:30:04.384 INFO:teuthology.orchestra.run.vm04.stdout:Upgrade 2 Packages
2026-03-09T13:30:04.384 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:30:04.384 INFO:teuthology.orchestra.run.vm04.stdout:Total download size: 210 M
2026-03-09T13:30:04.384 INFO:teuthology.orchestra.run.vm04.stdout:Downloading Packages:
2026-03-09T13:30:06.092 INFO:teuthology.orchestra.run.vm04.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 13 kB/s | 6.5 kB 00:00
2026-03-09T13:30:06.937 INFO:teuthology.orchestra.run.vm04.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.4 MB/s | 1.2 MB 00:00
2026-03-09T13:30:07.061 INFO:teuthology.orchestra.run.vm04.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 1.1 MB/s | 145 kB 00:00
2026-03-09T13:30:07.452 INFO:teuthology.orchestra.run.vm04.stdout:(4/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 3.0 MB/s | 5.5 MB 00:01
2026-03-09T13:30:07.553 INFO:teuthology.orchestra.run.vm04.stdout:(5/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 4.9 MB/s | 2.4 MB 00:00
2026-03-09T13:30:07.709 INFO:teuthology.orchestra.run.vm04.stdout:(6/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 4.2 MB/s | 1.1 MB 00:00
2026-03-09T13:30:08.524 INFO:teuthology.orchestra.run.vm04.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 4.9 MB/s | 4.7 MB 00:00
2026-03-09T13:30:08.661 INFO:teuthology.orchestra.run.vm04.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 7.1 MB/s | 22 MB 00:03
2026-03-09T13:30:08.777 INFO:teuthology.orchestra.run.vm04.stdout:(9/136): ceph-selinux-19.2.3-678.ge911bdeb.el9. 216 kB/s | 25 kB 00:00
2026-03-09T13:30:10.301 INFO:teuthology.orchestra.run.vm04.stdout:(10/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 6.6 MB/s | 17 MB 00:02
2026-03-09T13:30:10.427 INFO:teuthology.orchestra.run.vm04.stdout:(11/136): libcephfs-devel-19.2.3-678.ge911bdeb. 267 kB/s | 34 kB 00:00
2026-03-09T13:30:10.671 INFO:teuthology.orchestra.run.vm04.stdout:(12/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 4.0 MB/s | 1.0 MB 00:00
2026-03-09T13:30:10.793 INFO:teuthology.orchestra.run.vm04.stdout:(13/136): libcephsqlite-19.2.3-678.ge911bdeb.el 1.3 MB/s | 163 kB 00:00
2026-03-09T13:30:10.916 INFO:teuthology.orchestra.run.vm04.stdout:(14/136): librados-devel-19.2.3-678.ge911bdeb.e 1.0 MB/s | 127 kB 00:00
2026-03-09T13:30:10.955 INFO:teuthology.orchestra.run.vm04.stdout:(15/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9 4.4 MB/s | 11 MB 00:02
2026-03-09T13:30:11.043 INFO:teuthology.orchestra.run.vm04.stdout:(16/136): libradosstriper1-19.2.3-678.ge911bdeb 3.9 MB/s | 503 kB 00:00
2026-03-09T13:30:11.164 INFO:teuthology.orchestra.run.vm04.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 372 kB/s | 45 kB 00:00
2026-03-09T13:30:11.286 INFO:teuthology.orchestra.run.vm04.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 1.1 MB/s | 142 kB 00:00
2026-03-09T13:30:11.409 INFO:teuthology.orchestra.run.vm04.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.3 MB/s | 165 kB 00:00
2026-03-09T13:30:11.534 INFO:teuthology.orchestra.run.vm04.stdout:(20/136): python3-rados-19.2.3-678.ge911bdeb.el 2.5 MB/s | 323 kB 00:00
2026-03-09T13:30:11.658 INFO:teuthology.orchestra.run.vm04.stdout:(21/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.4 MB/s | 303 kB 00:00
2026-03-09T13:30:11.781 INFO:teuthology.orchestra.run.vm04.stdout:(22/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 818 kB/s | 100 kB 00:00
2026-03-09T13:30:11.903 INFO:teuthology.orchestra.run.vm04.stdout:(23/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 699 kB/s | 85 kB 00:00
2026-03-09T13:30:12.176 INFO:teuthology.orchestra.run.vm04.stdout:(24/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 4.4 MB/s | 5.4 MB 00:01
2026-03-09T13:30:12.298 INFO:teuthology.orchestra.run.vm04.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.4 MB/s | 171 kB 00:00
2026-03-09T13:30:12.422 INFO:teuthology.orchestra.run.vm04.stdout:(26/136): ceph-grafana-dashboards-19.2.3-678.ge 252 kB/s | 31 kB 00:00
2026-03-09T13:30:12.529 INFO:teuthology.orchestra.run.vm04.stdout:(27/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 5.0 MB/s | 3.1 MB 00:00
2026-03-09T13:30:12.949 INFO:teuthology.orchestra.run.vm04.stdout:(28/136): ceph-test-19.2.3-678.ge911bdeb.el9.x8 12 MB/s | 50 MB 00:04
2026-03-09T13:30:13.314 INFO:teuthology.orchestra.run.vm04.stdout:(29/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 169 kB/s | 150 kB 00:00
2026-03-09T13:30:13.438 INFO:teuthology.orchestra.run.vm04.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 2.0 MB/s | 253 kB 00:00
2026-03-09T13:30:13.560 INFO:teuthology.orchestra.run.vm04.stdout:(31/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 406 kB/s | 49 kB 00:00
2026-03-09T13:30:13.681 INFO:teuthology.orchestra.run.vm04.stdout:(32/136): ceph-prometheus-alerts-19.2.3-678.ge9 138 kB/s | 17 kB 00:00
2026-03-09T13:30:13.805 INFO:teuthology.orchestra.run.vm04.stdout:(33/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.3 MB/s | 299 kB 00:00
2026-03-09T13:30:13.908 INFO:teuthology.orchestra.run.vm04.stdout:(34/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 2.8 MB/s | 3.8 MB 00:01
2026-03-09T13:30:14.051 INFO:teuthology.orchestra.run.vm04.stdout:(35/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 3.1 MB/s | 769 kB 00:00
2026-03-09T13:30:14.193 INFO:teuthology.orchestra.run.vm04.stdout:(36/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.2 MB/s | 351 kB 00:00
2026-03-09T13:30:14.193 INFO:teuthology.orchestra.run.vm04.stdout:(37/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 285 kB/s | 40 kB 00:00
2026-03-09T13:30:14.242 INFO:teuthology.orchestra.run.vm04.stdout:(38/136): libconfig-1.7.2-9.el9.x86_64.rpm 1.4 MB/s | 72 kB 00:00
2026-03-09T13:30:14.294 INFO:teuthology.orchestra.run.vm04.stdout:(39/136): libquadmath-11.5.0-14.el9.x86_64.rpm 3.5 MB/s | 184 kB 00:00
2026-03-09T13:30:14.342 INFO:teuthology.orchestra.run.vm04.stdout:(40/136): ceph-mgr-diskprediction-local-19.2.3- 5.3 MB/s | 7.4 MB 00:01
2026-03-09T13:30:14.343 INFO:teuthology.orchestra.run.vm04.stdout:(41/136): mailcap-2.1.49-5.el9.noarch.rpm 677 kB/s | 33 kB 00:00
2026-03-09T13:30:14.391 INFO:teuthology.orchestra.run.vm04.stdout:(42/136): libgfortran-11.5.0-14.el9.x86_64.rpm 3.9 MB/s | 794 kB 00:00
2026-03-09T13:30:14.395 INFO:teuthology.orchestra.run.vm04.stdout:(43/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 4.7 MB/s | 253 kB 00:00
2026-03-09T13:30:14.457 INFO:teuthology.orchestra.run.vm04.stdout:(44/136): python3-ply-3.11-14.el9.noarch.rpm 1.7 MB/s | 106 kB 00:00
2026-03-09T13:30:14.508 INFO:teuthology.orchestra.run.vm04.stdout:(45/136): python3-pycparser-2.20-6.el9.noarch.r 2.6 MB/s | 135 kB 00:00
2026-03-09T13:30:14.547 INFO:teuthology.orchestra.run.vm04.stdout:(46/136): pciutils-3.7.0-7.el9.x86_64.rpm 455 kB/s | 93 kB 00:00
2026-03-09T13:30:14.558 INFO:teuthology.orchestra.run.vm04.stdout:(47/136): python3-requests-2.25.1-10.el9.noarch 2.5 MB/s | 126 kB 00:00
2026-03-09T13:30:14.589 INFO:teuthology.orchestra.run.vm04.stdout:(48/136): python3-cryptography-36.0.1-5.el9.x86 6.3 MB/s | 1.2 MB 00:00
2026-03-09T13:30:14.617 INFO:teuthology.orchestra.run.vm04.stdout:(49/136): unzip-6.0-59.el9.x86_64.rpm 3.1 MB/s | 182 kB 00:00
2026-03-09T13:30:14.641 INFO:teuthology.orchestra.run.vm04.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 5.0 MB/s | 266 kB 00:00
2026-03-09T13:30:14.644 INFO:teuthology.orchestra.run.vm04.stdout:(51/136): python3-urllib3-1.26.5-7.el9.noarch.r 2.2 MB/s | 218 kB 00:00
2026-03-09T13:30:14.803 INFO:teuthology.orchestra.run.vm04.stdout:(52/136): flexiblas-3.0.4-9.el9.x86_64.rpm 183 kB/s | 30 kB 00:00
2026-03-09T13:30:14.839 INFO:teuthology.orchestra.run.vm04.stdout:(53/136): boost-program-options-1.75.0-13.el9.x 468 kB/s | 104 kB 00:00
2026-03-09T13:30:14.856 INFO:teuthology.orchestra.run.vm04.stdout:(54/136): flexiblas-openblas-openmp-3.0.4-9.el9 282 kB/s | 15 kB 00:00
2026-03-09T13:30:14.919 INFO:teuthology.orchestra.run.vm04.stdout:(55/136): libnbd-1.20.3-4.el9.x86_64.rpm 2.0 MB/s | 164 kB 00:00
2026-03-09T13:30:14.934 INFO:teuthology.orchestra.run.vm04.stdout:(56/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 2.0 MB/s | 160 kB 00:00
2026-03-09T13:30:14.968 INFO:teuthology.orchestra.run.vm04.stdout:(57/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 924 kB/s | 45 kB 00:00
2026-03-09T13:30:14.986 INFO:teuthology.orchestra.run.vm04.stdout:(58/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 8.7 MB/s | 3.0 MB 00:00
2026-03-09T13:30:15.014 INFO:teuthology.orchestra.run.vm04.stdout:(59/136): librdkafka-1.6.1-102.el9.x86_64.rpm 8.2 MB/s | 662 kB 00:00
2026-03-09T13:30:15.026 INFO:teuthology.orchestra.run.vm04.stdout:(60/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 4.2 MB/s | 246 kB 00:00
2026-03-09T13:30:15.050 INFO:teuthology.orchestra.run.vm04.stdout:(61/136): libxslt-1.1.34-12.el9.x86_64.rpm 3.6 MB/s | 233 kB 00:00
2026-03-09T13:30:15.082 INFO:teuthology.orchestra.run.vm04.stdout:(62/136): lua-5.4.4-4.el9.x86_64.rpm 3.3 MB/s | 188 kB 00:00
2026-03-09T13:30:15.085 INFO:teuthology.orchestra.run.vm04.stdout:(63/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 4.0 MB/s | 292 kB 00:00
2026-03-09T13:30:15.098 INFO:teuthology.orchestra.run.vm04.stdout:(64/136): openblas-0.3.29-1.el9.x86_64.rpm 861 kB/s | 42 kB 00:00
2026-03-09T13:30:15.177 INFO:teuthology.orchestra.run.vm04.stdout:(65/136): protobuf-3.14.0-17.el9.x86_64.rpm 11 MB/s | 1.0 MB 00:00
2026-03-09T13:30:15.287 INFO:teuthology.orchestra.run.vm04.stdout:(66/136): python3-devel-3.9.25-3.el9.x86_64.rpm 2.2 MB/s | 244 kB 00:00
2026-03-09T13:30:15.314 INFO:teuthology.orchestra.run.vm04.stdout:(67/136): openblas-openmp-0.3.29-1.el9.x86_64.r 23 MB/s | 5.3 MB 00:00
2026-03-09T13:30:15.340 INFO:teuthology.orchestra.run.vm04.stdout:(68/136): python3-babel-2.9.1-2.el9.noarch.rpm 25 MB/s | 6.0 MB 00:00
2026-03-09T13:30:15.347 INFO:teuthology.orchestra.run.vm04.stdout:(69/136): python3-jinja2-2.11.3-8.el9.noarch.rp 4.0 MB/s | 249 kB 00:00
2026-03-09T13:30:15.364 INFO:teuthology.orchestra.run.vm04.stdout:(70/136): python3-jmespath-1.0.1-1.el9.noarch.r 981 kB/s | 48 kB 00:00
2026-03-09T13:30:15.400 INFO:teuthology.orchestra.run.vm04.stdout:(71/136): python3-libstoragemgmt-1.10.1-1.el9.x 2.9 MB/s | 177 kB 00:00
2026-03-09T13:30:15.402 INFO:teuthology.orchestra.run.vm04.stdout:(72/136): python3-mako-1.1.4-6.el9.noarch.rpm 3.0 MB/s | 172 kB 00:00
2026-03-09T13:30:15.412 INFO:teuthology.orchestra.run.vm04.stdout:(73/136): python3-markupsafe-1.1.1-12.el9.x86_6 720 kB/s | 35 kB 00:00
2026-03-09T13:30:15.468 INFO:teuthology.orchestra.run.vm04.stdout:(74/136): python3-packaging-20.9-5.el9.noarch.r 1.4 MB/s | 77 kB 00:00
2026-03-09T13:30:15.483 INFO:teuthology.orchestra.run.vm04.stdout:(75/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 5.4 MB/s | 442 kB 00:00
2026-03-09T13:30:15.533 INFO:teuthology.orchestra.run.vm04.stdout:(76/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 46 MB/s | 6.1 MB 00:00
2026-03-09T13:30:15.535 INFO:teuthology.orchestra.run.vm04.stdout:(77/136): python3-protobuf-3.14.0-17.el9.noarch 3.9 MB/s | 267 kB 00:00
2026-03-09T13:30:15.535 INFO:teuthology.orchestra.run.vm04.stdout:(78/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 2.9 MB/s | 157 kB 00:00
2026-03-09T13:30:15.587 INFO:teuthology.orchestra.run.vm04.stdout:(79/136): python3-pyasn1-modules-0.4.8-7.el9.no 5.1 MB/s | 277 kB 00:00
2026-03-09T13:30:15.626 INFO:teuthology.orchestra.run.vm04.stdout:(80/136): python3-requests-oauthlib-1.3.0-12.el 598 kB/s | 54 kB 00:00
2026-03-09T13:30:15.646 INFO:teuthology.orchestra.run.vm04.stdout:(81/136): python3-toml-0.10.2-6.el9.noarch.rpm 713 kB/s | 42 kB 00:00
2026-03-09T13:30:15.696 INFO:teuthology.orchestra.run.vm04.stdout:(82/136): qatlib-25.08.0-2.el9.x86_64.rpm 3.4 MB/s | 240 kB 00:00
2026-03-09T13:30:15.707 INFO:teuthology.orchestra.run.vm04.stdout:(83/136): qatlib-service-25.08.0-2.el9.x86_64.r 604 kB/s | 37 kB 00:00
2026-03-09T13:30:15.760 INFO:teuthology.orchestra.run.vm04.stdout:(84/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 1.0 MB/s | 66 kB 00:00
2026-03-09T13:30:15.769 INFO:teuthology.orchestra.run.vm04.stdout:(85/136): socat-1.7.4.1-8.el9.x86_64.rpm 4.8 MB/s | 303 kB 00:00
2026-03-09T13:30:15.819 INFO:teuthology.orchestra.run.vm04.stdout:(86/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 1.1 MB/s | 64 kB 00:00
2026-03-09T13:30:15.881 INFO:teuthology.orchestra.run.vm04.stdout:(87/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 56 MB/s | 19 MB 00:00
2026-03-09T13:30:15.895 INFO:teuthology.orchestra.run.vm04.stdout:(88/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 39 MB/s | 551 kB 00:00
2026-03-09T13:30:15.901 INFO:teuthology.orchestra.run.vm04.stdout:(89/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 51 MB/s | 308 kB 00:00
2026-03-09T13:30:15.903 INFO:teuthology.orchestra.run.vm04.stdout:(90/136): grpc-data-1.46.7-10.el9.noarch.rpm 9.9 MB/s | 19 kB 00:00
2026-03-09T13:30:15.969 INFO:teuthology.orchestra.run.vm04.stdout:(91/136): libarrow-9.0.0-15.el9.x86_64.rpm 68 MB/s | 4.4 MB 00:00
2026-03-09T13:30:15.972 INFO:teuthology.orchestra.run.vm04.stdout:(92/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 9.4 MB/s | 25 kB 00:00
2026-03-09T13:30:15.975 INFO:teuthology.orchestra.run.vm04.stdout:(93/136): liboath-2.6.12-1.el9.x86_64.rpm 16 MB/s | 49 kB 00:00
2026-03-09T13:30:15.978 INFO:teuthology.orchestra.run.vm04.stdout:(94/136): libunwind-1.6.2-1.el9.x86_64.rpm 23 MB/s | 67 kB 00:00
2026-03-09T13:30:15.982 INFO:teuthology.orchestra.run.vm04.stdout:(95/136): luarocks-3.9.2-5.el9.noarch.rpm 40 MB/s | 151 kB 00:00
2026-03-09T13:30:15.995 INFO:teuthology.orchestra.run.vm04.stdout:(96/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 66 MB/s | 838 kB 00:00
2026-03-09T13:30:16.004 INFO:teuthology.orchestra.run.vm04.stdout:(97/136): python3-asyncssh-2.13.2-5.el9.noarch. 64 MB/s | 548 kB 00:00
2026-03-09T13:30:16.008 INFO:teuthology.orchestra.run.vm04.stdout:(98/136): python3-autocommand-2.2.2-8.el9.noarc 7.3 MB/s | 29 kB 00:00
2026-03-09T13:30:16.011 INFO:teuthology.orchestra.run.vm04.stdout:(99/136): python3-backports-tarfile-1.2.0-1.el9 22 MB/s | 60 kB 00:00
2026-03-09T13:30:16.013 INFO:teuthology.orchestra.run.vm04.stdout:(100/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 17 MB/s | 43 kB 00:00
2026-03-09T13:30:16.016 INFO:teuthology.orchestra.run.vm04.stdout:(101/136): python3-cachetools-4.2.4-1.el9.noarc 14 MB/s | 32 kB 00:00
2026-03-09T13:30:16.018 INFO:teuthology.orchestra.run.vm04.stdout:(102/136): python3-certifi-2023.05.07-4.el9.noa 6.3 MB/s | 14 kB 00:00
2026-03-09T13:30:16.022 INFO:teuthology.orchestra.run.vm04.stdout:(103/136): python3-cheroot-10.0.1-4.el9.noarch. 45 MB/s | 173 kB 00:00
2026-03-09T13:30:16.028 INFO:teuthology.orchestra.run.vm04.stdout:(104/136): python3-cherrypy-18.6.1-2.el9.noarch 58 MB/s | 358 kB 00:00
2026-03-09T13:30:16.034 INFO:teuthology.orchestra.run.vm04.stdout:(105/136): python3-google-auth-2.45.0-1.el9.noa 45 MB/s | 254 kB 00:00
2026-03-09T13:30:16.062 INFO:teuthology.orchestra.run.vm04.stdout:(106/136): python3-grpcio-1.46.7-10.el9.x86_64. 75 MB/s | 2.0 MB 00:00
2026-03-09T13:30:16.065 INFO:teuthology.orchestra.run.vm04.stdout:(107/136): python3-grpcio-tools-1.46.7-10.el9.x 40 MB/s | 144 kB 00:00
2026-03-09T13:30:16.068 INFO:teuthology.orchestra.run.vm04.stdout:(108/136): python3-jaraco-8.2.1-3.el9.noarch.rp 3.8 MB/s | 11 kB 00:00
2026-03-09T13:30:16.070 INFO:teuthology.orchestra.run.vm04.stdout:(109/136): python3-jaraco-classes-3.2.1-5.el9.n 8.0 MB/s | 18 kB 00:00
2026-03-09T13:30:16.073 INFO:teuthology.orchestra.run.vm04.stdout:(110/136): python3-jaraco-collections-3.0.0-8.e 10 MB/s | 23 kB 00:00
2026-03-09T13:30:16.075 INFO:teuthology.orchestra.run.vm04.stdout:(111/136): python3-jaraco-context-6.0.1-3.el9.n 9.0 MB/s | 20 kB 00:00
2026-03-09T13:30:16.077 INFO:teuthology.orchestra.run.vm04.stdout:(112/136): python3-jaraco-functools-3.5.0-2.el9 9.0 MB/s | 19 kB 00:00
2026-03-09T13:30:16.080 INFO:teuthology.orchestra.run.vm04.stdout:(113/136): python3-jaraco-text-4.0.0-2.el9.noar 10 MB/s | 26 kB 00:00
2026-03-09T13:30:16.095 INFO:teuthology.orchestra.run.vm04.stdout:(114/136): python3-kubernetes-26.1.0-3.el9.noar 70 MB/s | 1.0 MB 00:00
2026-03-09T13:30:16.098 INFO:teuthology.orchestra.run.vm04.stdout:(115/136): python3-logutils-0.3.5-21.el9.noarch 17 MB/s | 46 kB 00:00
2026-03-09T13:30:16.101 INFO:teuthology.orchestra.run.vm04.stdout:(116/136): python3-more-itertools-8.12.0-2.el9. 28 MB/s | 79 kB 00:00
2026-03-09T13:30:16.104 INFO:teuthology.orchestra.run.vm04.stdout:(117/136): python3-natsort-7.1.1-5.el9.noarch.r 19 MB/s | 58 kB 00:00
2026-03-09T13:30:16.110 INFO:teuthology.orchestra.run.vm04.stdout:(118/136): python3-pecan-1.4.2-3.el9.noarch.rpm 44 MB/s | 272 kB 00:00
2026-03-09T13:30:16.115 INFO:teuthology.orchestra.run.vm04.stdout:(119/136): python3-portend-3.1.0-2.el9.noarch.r 3.8 MB/s | 16 kB 00:00
2026-03-09T13:30:16.119 INFO:teuthology.orchestra.run.vm04.stdout:(120/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 22 MB/s | 90 kB 00:00
2026-03-09T13:30:16.121 INFO:teuthology.orchestra.run.vm04.stdout:(121/136): python3-repoze-lru-0.7-16.el9.noarch 12 MB/s | 31 kB 00:00
2026-03-09T13:30:16.126 INFO:teuthology.orchestra.run.vm04.stdout:(122/136): python3-routes-2.5.1-5.el9.noarch.rp 39 MB/s | 188 kB 00:00
2026-03-09T13:30:16.129 INFO:teuthology.orchestra.run.vm04.stdout:(123/136): python3-rsa-4.9-2.el9.noarch.rpm 20 MB/s | 59 kB 00:00
2026-03-09T13:30:16.132 INFO:teuthology.orchestra.run.vm04.stdout:(124/136): python3-tempora-5.0.0-2.el9.noarch.r 14 MB/s | 36 kB 00:00
2026-03-09T13:30:16.135 INFO:teuthology.orchestra.run.vm04.stdout:(125/136): python3-typing-extensions-4.15.0-1.e 28 MB/s | 86 kB 00:00
2026-03-09T13:30:16.140 INFO:teuthology.orchestra.run.vm04.stdout:(126/136): python3-webob-1.8.8-2.el9.noarch.rpm 49 MB/s | 230 kB 00:00
2026-03-09T13:30:16.140 INFO:teuthology.orchestra.run.vm04.stdout:(127/136): lua-devel-5.4.4-4.el9.x86_64.rpm 60 kB/s | 22 kB 00:00
2026-03-09T13:30:16.144 INFO:teuthology.orchestra.run.vm04.stdout:(128/136): python3-websocket-client-1.2.3-2.el9 20 MB/s | 90 kB 00:00
2026-03-09T13:30:16.147 INFO:teuthology.orchestra.run.vm04.stdout:(129/136): python3-xmltodict-0.12.0-15.el9.noar 8.1 MB/s | 22 kB 00:00
2026-03-09T13:30:16.150 INFO:teuthology.orchestra.run.vm04.stdout:(130/136): python3-zc-lockfile-2.0-10.el9.noarc 7.4 MB/s | 20 kB 00:00
2026-03-09T13:30:16.152 INFO:teuthology.orchestra.run.vm04.stdout:(131/136): python3-werkzeug-2.0.3-3.el9.1.noarc 38 MB/s | 427 kB 00:00
2026-03-09T13:30:16.154 INFO:teuthology.orchestra.run.vm04.stdout:(132/136): re2-20211101-20.el9.x86_64.rpm 49 MB/s | 191 kB 00:00
2026-03-09T13:30:16.174 INFO:teuthology.orchestra.run.vm04.stdout:(133/136): thrift-0.15.0-4.el9.x86_64.rpm 71 MB/s | 1.6 MB 00:00
2026-03-09T13:30:16.354 INFO:teuthology.orchestra.run.vm04.stdout:(134/136): protobuf-compiler-3.14.0-17.el9.x86_ 1.6 MB/s | 862 kB 00:00
2026-03-09T13:30:17.327 INFO:teuthology.orchestra.run.vm04.stdout:(135/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 2.7 MB/s | 3.2 MB 00:01
2026-03-09T13:30:17.601 INFO:teuthology.orchestra.run.vm04.stdout:(136/136): librados2-19.2.3-678.ge911bdeb.el9.x 2.4 MB/s | 3.4 MB 00:01
2026-03-09T13:30:17.606 INFO:teuthology.orchestra.run.vm04.stdout:--------------------------------------------------------------------------------
2026-03-09T13:30:17.606 INFO:teuthology.orchestra.run.vm04.stdout:Total 16 MB/s | 210 MB 00:13
2026-03-09T13:30:18.187 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T13:30:18.242 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T13:30:18.242 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T13:30:19.117 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T13:30:19.117 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T13:30:20.054 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-09T13:30:20.071 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138
2026-03-09T13:30:20.085 INFO:teuthology.orchestra.run.vm04.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138
2026-03-09T13:30:20.262 INFO:teuthology.orchestra.run.vm04.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138
2026-03-09T13:30:20.265 INFO:teuthology.orchestra.run.vm04.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-09T13:30:20.328 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-09T13:30:20.331 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-09T13:30:20.360 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-09T13:30:20.370 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-09T13:30:20.374 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138
2026-03-09T13:30:20.376 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138
2026-03-09T13:30:20.381 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138
2026-03-09T13:30:20.392 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138
2026-03-09T13:30:20.393 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-09T13:30:20.433 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-09T13:30:20.434 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-09T13:30:20.450 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-09T13:30:20.487 INFO:teuthology.orchestra.run.vm04.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138
2026-03-09T13:30:20.529 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138
2026-03-09T13:30:20.534 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138
2026-03-09T13:30:20.563 INFO:teuthology.orchestra.run.vm04.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138
2026-03-09T13:30:20.578 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138
2026-03-09T13:30:20.587 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138
2026-03-09T13:30:20.598 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138
2026-03-09T13:30:20.606 INFO:teuthology.orchestra.run.vm04.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138
2026-03-09T13:30:20.609 INFO:teuthology.orchestra.run.vm04.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138
2026-03-09T13:30:20.616 INFO:teuthology.orchestra.run.vm04.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138
2026-03-09T13:30:20.646 INFO:teuthology.orchestra.run.vm04.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138
2026-03-09T13:30:20.667 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138
2026-03-09T13:30:20.672 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138
2026-03-09T13:30:20.680 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138
2026-03-09T13:30:20.683 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138
2026-03-09T13:30:20.717 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138
2026-03-09T13:30:20.724 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138
2026-03-09T13:30:20.735 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138
2026-03-09T13:30:20.752 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138
2026-03-09T13:30:20.762 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138
2026-03-09T13:30:20.795 INFO:teuthology.orchestra.run.vm04.stdout: Installing : zip-3.0-35.el9.x86_64 33/138
2026-03-09T13:30:20.801 INFO:teuthology.orchestra.run.vm04.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138
2026-03-09T13:30:20.810 INFO:teuthology.orchestra.run.vm04.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138
2026-03-09T13:30:20.841 INFO:teuthology.orchestra.run.vm04.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138
2026-03-09T13:30:20.909 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138
2026-03-09T13:30:20.930 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138
2026-03-09T13:30:20.940 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138
2026-03-09T13:30:20.950 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138
2026-03-09T13:30:20.956 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138
2026-03-09T13:30:20.961 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138
2026-03-09T13:30:20.982 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138
2026-03-09T13:30:21.013 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138
2026-03-09T13:30:21.020 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138
2026-03-09T13:30:21.027 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138
2026-03-09T13:30:21.042 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138
2026-03-09T13:30:21.056 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138
2026-03-09T13:30:21.068 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138
2026-03-09T13:30:21.134 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138
2026-03-09T13:30:21.142 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138
2026-03-09T13:30:21.153 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 52/138
2026-03-09T13:30:21.202 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138
2026-03-09T13:30:21.595 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138
2026-03-09T13:30:21.634 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138
2026-03-09T13:30:21.640 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138
2026-03-09T13:30:21.648 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138
2026-03-09T13:30:21.653 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138
2026-03-09T13:30:21.661 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138
2026-03-09T13:30:21.665 INFO:teuthology.orchestra.run.vm04.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138
2026-03-09T13:30:21.668 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138
2026-03-09T13:30:21.700 INFO:teuthology.orchestra.run.vm04.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138
2026-03-09T13:30:21.755 INFO:teuthology.orchestra.run.vm04.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138
2026-03-09T13:30:21.770 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138
2026-03-09T13:30:21.779 INFO:teuthology.orchestra.run.vm04.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138
2026-03-09T13:30:21.784 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138
2026-03-09T13:30:21.795 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138
2026-03-09T13:30:21.800 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138
2026-03-09T13:30:21.811 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138
2026-03-09T13:30:21.816 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138
2026-03-09T13:30:21.852 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138
2026-03-09T13:30:21.868 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138
2026-03-09T13:30:21.913 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138
2026-03-09T13:30:22.191 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138
2026-03-09T13:30:22.223 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138
2026-03-09T13:30:22.230 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138
2026-03-09T13:30:22.292 INFO:teuthology.orchestra.run.vm04.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138
2026-03-09T13:30:22.298 INFO:teuthology.orchestra.run.vm04.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138
2026-03-09T13:30:22.322 INFO:teuthology.orchestra.run.vm04.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138
2026-03-09T13:30:22.714 INFO:teuthology.orchestra.run.vm04.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138
2026-03-09T13:30:22.825 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138
2026-03-09T13:30:23.678 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138
2026-03-09T13:30:23.708 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138
2026-03-09T13:30:23.715 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138
2026-03-09T13:30:23.719 INFO:teuthology.orchestra.run.vm04.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138
2026-03-09T13:30:23.899 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138
2026-03-09T13:30:23.902 INFO:teuthology.orchestra.run.vm04.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-09T13:30:23.938 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-09T13:30:23.942 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138
2026-03-09T13:30:23.950 INFO:teuthology.orchestra.run.vm04.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138
2026-03-09T13:30:24.209 INFO:teuthology.orchestra.run.vm04.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138
2026-03-09T13:30:24.212 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-09T13:30:24.233 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-09T13:30:24.235 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138
2026-03-09T13:30:25.345 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-09T13:30:25.350 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-09T13:30:25.373 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-09T13:30:25.390 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138
2026-03-09T13:30:25.410 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138
2026-03-09T13:30:25.499 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138
2026-03-09T13:30:25.513 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138
2026-03-09T13:30:25.541 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138
2026-03-09T13:30:25.579 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138
2026-03-09T13:30:25.648 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138
2026-03-09T13:30:25.657 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138
2026-03-09T13:30:25.664 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-09T13:30:25.670 INFO:teuthology.orchestra.run.vm04.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138
2026-03-09T13:30:25.673 INFO:teuthology.orchestra.run.vm04.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138
2026-03-09T13:30:25.676 INFO:teuthology.orchestra.run.vm04.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-09T13:30:25.695 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-09T13:30:26.003 INFO:teuthology.orchestra.run.vm04.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138
2026-03-09T13:30:26.052 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-09T13:30:26.094 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-09T13:30:26.094 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-09T13:30:26.094 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-09T13:30:26.094 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:30:26.100 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-09T13:30:32.894 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-09T13:30:32.894 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /sys
2026-03-09T13:30:32.894 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /proc
2026-03-09T13:30:32.894 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /mnt
2026-03-09T13:30:32.894 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /var/tmp
2026-03-09T13:30:32.894 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /home
2026-03-09T13:30:32.894 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /root
2026-03-09T13:30:32.894 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /tmp
2026-03-09T13:30:32.894 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:30:33.023 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-09T13:30:33.050 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-09T13:30:33.050 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:30:33.051 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-09T13:30:33.051 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-09T13:30:33.051 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-09T13:30:33.051 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:30:33.326 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-09T13:30:33.349 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-09T13:30:33.349 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:30:33.349 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-09T13:30:33.349 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-09T13:30:33.349 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-09T13:30:33.350 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:30:33.398 INFO:teuthology.orchestra.run.vm04.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138
2026-03-09T13:30:33.447 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138
2026-03-09T13:30:33.557 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-09T13:30:33.557 INFO:teuthology.orchestra.run.vm04.stdout:Creating group 'qat' with GID 994.
2026-03-09T13:30:33.557 INFO:teuthology.orchestra.run.vm04.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-09T13:30:33.557 INFO:teuthology.orchestra.run.vm04.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-09T13:30:33.557 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:30:33.568 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-09T13:30:33.596 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-09T13:30:33.596 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-09T13:30:33.596 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:30:33.753 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138
2026-03-09T13:30:33.829 INFO:teuthology.orchestra.run.vm04.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138
2026-03-09T13:30:33.834 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-09T13:30:33.851 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-09T13:30:33.851 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:30:33.851 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-09T13:30:33.851 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:30:34.671 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-09T13:30:34.699 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-09T13:30:34.699 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:30:34.699 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-09T13:30:34.699 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-09T13:30:34.699 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-09T13:30:34.699 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:30:34.767 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-09T13:30:34.771 INFO:teuthology.orchestra.run.vm04.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-09T13:30:34.778 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138
2026-03-09T13:30:34.803 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138
2026-03-09T13:30:34.807 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-09T13:30:35.365 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-09T13:30:35.373 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-09T13:30:35.909 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-09T13:30:35.951 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-09T13:30:36.015 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-09T13:30:36.073 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138
2026-03-09T13:30:36.104 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-09T13:30:36.130 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-09T13:30:36.130 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:30:36.130 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-09T13:30:36.130 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-09T13:30:36.130 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-09T13:30:36.130 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:30:36.174 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-09T13:30:36.187 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-09T13:30:36.689 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138
2026-03-09T13:30:36.693 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-09T13:30:36.717 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-09T13:30:36.717 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:30:36.717 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-09T13:30:36.717 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-09T13:30:36.717 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-09T13:30:36.717 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:30:36.729 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-09T13:30:36.750 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-09T13:30:36.750 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:30:36.750 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-09T13:30:36.750 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:30:36.909 INFO:teuthology.orchestra.run.vm04.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-09T13:30:36.930 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-09T13:30:36.930 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:30:36.930 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-09T13:30:36.930 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-09T13:30:36.930 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-09T13:30:36.930 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:30:39.490 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138
2026-03-09T13:30:39.502 INFO:teuthology.orchestra.run.vm04.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138
2026-03-09T13:30:39.509 INFO:teuthology.orchestra.run.vm04.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138
2026-03-09T13:30:39.566 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138
2026-03-09T13:30:39.576 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-09T13:30:39.581 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138
2026-03-09T13:30:39.581 INFO:teuthology.orchestra.run.vm04.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138
2026-03-09T13:30:39.599 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138
2026-03-09T13:30:39.599 INFO:teuthology.orchestra.run.vm04.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138
2026-03-09T13:30:40.860 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138
2026-03-09T13:30:40.861 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138
2026-03-09T13:30:40.861 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138
2026-03-09T13:30:40.861 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138
2026-03-09T13:30:40.861 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138
2026-03-09T13:30:40.862 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 69/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138
2026-03-09T13:30:40.863 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 85/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 93/138
2026-03-09T13:30:40.864 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138
2026-03-09T13:30:40.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout:Upgraded:
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout:Installed:
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-09T13:30:40.973 INFO:teuthology.orchestra.run.vm04.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: lua-5.4.4-4.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-jmespath-1.0.1-1.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout:
python3-logutils-0.3.5-21.el9.noarch 2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T13:30:40.974 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T13:30:40.975 
INFO:teuthology.orchestra.run.vm04.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: unzip-6.0-59.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: zip-3.0-35.el9.x86_64 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:30:40.975 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:30:41.313 DEBUG:teuthology.parallel:result is None 2026-03-09T13:30:41.313 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T13:30:41.953 DEBUG:teuthology.orchestra.run.vm04:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-09T13:30:41.975 INFO:teuthology.orchestra.run.vm04.stdout:19.2.3-678.ge911bdeb.el9 2026-03-09T13:30:41.975 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-09T13:30:41.975 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 2026-03-09T13:30:41.976 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-09T13:30:41.976 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T13:30:41.976 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-09T13:30:42.042 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 2026-03-09T13:30:42.042 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T13:30:42.042 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/daemon-helper 2026-03-09T13:30:42.106 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-09T13:30:42.170 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-09T13:30:42.170 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T13:30:42.170 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-09T13:30:42.234 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-09T13:30:42.298 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-09T13:30:42.298 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T13:30:42.298 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/stdin-killer 2026-03-09T13:30:42.361 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-09T13:30:42.424 INFO:teuthology.run_tasks:Running task cephadm... 
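Note how the install task validated the build above: it queried shaman for the branch/sha1, installed the packages, then compared the 'rpm -q ceph' output against the version shaman reported, where the RPM release carries a distro suffix ('.el9') that the build string lacks. A minimal sketch of that comparison, assuming a simple prefix match is sufficient (names here are illustrative, not teuthology's actual code):

    import subprocess

    # Build version shaman reported for this sha1 (see the log above).
    EXPECTED = "19.2.3-678.ge911bdeb"

    def installed_ceph_version() -> str:
        # The same query the install task runs on the remote host.
        return subprocess.run(
            ["rpm", "-q", "ceph", "--qf", "%{VERSION}-%{RELEASE}"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    version = installed_ceph_version()  # e.g. '19.2.3-678.ge911bdeb.el9'
    # The RPM release has a distro suffix the build string lacks,
    # so a prefix comparison is assumed here.
    assert version.startswith(EXPECTED), f"unexpected ceph build: {version}"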
2026-03-09T13:30:42.472 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon election default strategy': 3}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': True}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'CEPHADM_FAILED_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'} 2026-03-09T13:30:42.472 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T13:30:42.473 INFO:tasks.cephadm:Cluster fsid is 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 2026-03-09T13:30:42.473 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-09T13:30:42.473 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.104'} 2026-03-09T13:30:42.473 INFO:tasks.cephadm:First mon is mon.a on vm04 2026-03-09T13:30:42.473 INFO:tasks.cephadm:First mgr is a 2026-03-09T13:30:42.473 INFO:tasks.cephadm:Normalizing hostnames... 2026-03-09T13:30:42.473 DEBUG:teuthology.orchestra.run.vm04:> sudo hostname $(hostname -s) 2026-03-09T13:30:42.496 INFO:tasks.cephadm:Downloading "compiled" cephadm from chacra 2026-03-09T13:30:42.496 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T13:30:43.120 INFO:tasks.cephadm:builder_project result: [{'url': 'https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'chacra_url': 'https://3.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'centos', 'distro_version': '9', 'distro_codename': None, 'modified': '2026-02-25 18:55:15.146628', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['source', 'x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678.ge911bdeb', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.26+soko16', 'job_name': 'ceph-dev-pipeline'}}] 2026-03-09T13:30:43.820 INFO:tasks.util.chacra:got chacra host 3.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=centos%2F9%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T13:30:43.821 INFO:tasks.cephadm:Discovered chacra url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm 2026-03-09T13:30:43.821 INFO:tasks.cephadm:Downloading cephadm from url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm 2026-03-09T13:30:43.821 DEBUG:teuthology.orchestra.run.vm04:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-09T13:30:45.276 INFO:teuthology.orchestra.run.vm04.stdout:-rw-r--r--.
1 ubuntu ubuntu 788355 Mar 9 13:30 /home/ubuntu/cephtest/cephadm 2026-03-09T13:30:45.276 DEBUG:teuthology.orchestra.run.vm04:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-09T13:30:45.297 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 2026-03-09T13:30:45.297 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-09T13:30:45.503 INFO:teuthology.orchestra.run.vm04.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-09T13:32:16.473 INFO:teuthology.orchestra.run.vm04.stdout:{ 2026-03-09T13:32:16.473 INFO:teuthology.orchestra.run.vm04.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-09T13:32:16.473 INFO:teuthology.orchestra.run.vm04.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-09T13:32:16.473 INFO:teuthology.orchestra.run.vm04.stdout: "repo_digests": [ 2026-03-09T13:32:16.473 INFO:teuthology.orchestra.run.vm04.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-09T13:32:16.473 INFO:teuthology.orchestra.run.vm04.stdout: ] 2026-03-09T13:32:16.473 INFO:teuthology.orchestra.run.vm04.stdout:} 2026-03-09T13:32:16.493 DEBUG:teuthology.orchestra.run.vm04:> sudo mkdir -p /etc/ceph 2026-03-09T13:32:16.522 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod 777 /etc/ceph 2026-03-09T13:32:16.589 INFO:tasks.cephadm:Writing seed config... 2026-03-09T13:32:16.590 INFO:tasks.cephadm: override: [global] mon election default strategy = 3 2026-03-09T13:32:16.590 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-09T13:32:16.590 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-09T13:32:16.590 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = True 2026-03-09T13:32:16.590 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-09T13:32:16.590 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-09T13:32:16.590 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-09T13:32:16.590 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-09T13:32:16.590 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-09T13:32:16.590 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-09T13:32:16.591 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T13:32:16.591 DEBUG:teuthology.orchestra.run.vm04:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 2026-03-09T13:32:16.645 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20
mon election default strategy = 3
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
mgr/cephadm/use_agent = True
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-09T13:32:16.645 DEBUG:teuthology.orchestra.run.vm04:mon.a> sudo journalctl -f -n 0 -u ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mon.a.service 2026-03-09T13:32:16.688 DEBUG:teuthology.orchestra.run.vm04:mgr.a> sudo journalctl -f -n 0 -u ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a.service 2026-03-09T13:32:16.731 INFO:tasks.cephadm:Bootstrapping...
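The seed config handed to the bootstrap command below is simply the 'override:' settings above rendered as an INI file; the rest of the final config comes from teuthology's stock test template. A rough sketch of rendering such overrides with Python's configparser (an illustration, not teuthology's actual writer):

    import configparser

    # Overrides exactly as logged under 'Writing seed config...' above.
    overrides = {
        "global": {"mon election default strategy": "3"},
        "mgr": {"debug mgr": "20", "debug ms": "1",
                "mgr/cephadm/use_agent": "True"},
        "mon": {"debug mon": "20", "debug ms": "1", "debug paxos": "20"},
        "osd": {"debug ms": "1", "debug osd": "20",
                "osd mclock iops capacity threshold hdd": "49000"},
    }

    conf = configparser.ConfigParser()
    conf.read_dict(overrides)
    with open("seed.ceph.conf", "w") as f:
        conf.write(f)  # emits '[global]', then 'mon election default strategy = 3', etc.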
2026-03-09T13:32:16.731 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.104 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-09T13:32:16.873 INFO:teuthology.orchestra.run.vm04.stdout:-------------------------------------------------------------------------------- 2026-03-09T13:32:16.873 INFO:teuthology.orchestra.run.vm04.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'a', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.104', '--skip-admin-label'] 2026-03-09T13:32:16.874 INFO:teuthology.orchestra.run.vm04.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-09T13:32:16.874 INFO:teuthology.orchestra.run.vm04.stdout:Verifying podman|docker is present... 2026-03-09T13:32:16.891 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stdout 5.8.0 2026-03-09T13:32:16.891 INFO:teuthology.orchestra.run.vm04.stdout:Verifying lvm2 is present... 2026-03-09T13:32:16.891 INFO:teuthology.orchestra.run.vm04.stdout:Verifying time synchronization is in place... 2026-03-09T13:32:16.898 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-09T13:32:16.898 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-09T13:32:16.903 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-09T13:32:16.903 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout inactive 2026-03-09T13:32:16.910 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout enabled 2026-03-09T13:32:16.917 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout active 2026-03-09T13:32:16.917 INFO:teuthology.orchestra.run.vm04.stdout:Unit chronyd.service is enabled and running 2026-03-09T13:32:16.917 INFO:teuthology.orchestra.run.vm04.stdout:Repeating the final host check... 
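The host check above tolerates the failed probes of chrony.service because cephadm tries more than one time-synchronization unit name and accepts the first one systemd reports as both enabled and active, which here is chronyd.service. A sketch of that fallback logic (the candidate list is an assumption; the authoritative list lives in cephadm itself):

    import subprocess

    # Candidate units; illustrative, the real list is longer.
    CANDIDATES = ["chrony.service", "chronyd.service",
                  "systemd-timesyncd.service"]

    def time_sync_unit() -> str | None:
        for unit in CANDIDATES:
            enabled = subprocess.run(["systemctl", "is-enabled", unit],
                                     capture_output=True, text=True)
            active = subprocess.run(["systemctl", "is-active", unit],
                                    capture_output=True, text=True)
            if enabled.returncode == 0 and active.stdout.strip() == "active":
                return unit
        return None

    print(time_sync_unit())  # 'chronyd.service' on the host above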
2026-03-09T13:32:16.935 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stdout 5.8.0 2026-03-09T13:32:16.935 INFO:teuthology.orchestra.run.vm04.stdout:podman (/bin/podman) version 5.8.0 is present 2026-03-09T13:32:16.935 INFO:teuthology.orchestra.run.vm04.stdout:systemctl is present 2026-03-09T13:32:16.935 INFO:teuthology.orchestra.run.vm04.stdout:lvcreate is present 2026-03-09T13:32:16.941 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-09T13:32:16.941 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-09T13:32:16.946 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-09T13:32:16.946 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout inactive 2026-03-09T13:32:16.951 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout enabled 2026-03-09T13:32:16.956 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout active 2026-03-09T13:32:16.956 INFO:teuthology.orchestra.run.vm04.stdout:Unit chronyd.service is enabled and running 2026-03-09T13:32:16.956 INFO:teuthology.orchestra.run.vm04.stdout:Host looks OK 2026-03-09T13:32:16.956 INFO:teuthology.orchestra.run.vm04.stdout:Cluster fsid: 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 2026-03-09T13:32:16.956 INFO:teuthology.orchestra.run.vm04.stdout:Acquiring lock 140383263721504 on /run/cephadm/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20.lock 2026-03-09T13:32:16.957 INFO:teuthology.orchestra.run.vm04.stdout:Lock 140383263721504 acquired on /run/cephadm/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20.lock 2026-03-09T13:32:16.957 INFO:teuthology.orchestra.run.vm04.stdout:Verifying IP 192.168.123.104 port 3300 ... 2026-03-09T13:32:16.957 INFO:teuthology.orchestra.run.vm04.stdout:Verifying IP 192.168.123.104 port 6789 ... 
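Before laying down the monitor, bootstrap verifies that ports 3300 (msgr2) and 6789 (legacy msgr1) are usable on the mon IP, failing fast if something already listens there. One way to express such a check is a trial bind (a sketch, not necessarily cephadm's exact mechanism):

    import socket

    def port_is_free(ip: str, port: int) -> bool:
        # A bind succeeds only if nothing is listening on ip:port yet.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind((ip, port))
                return True
            except OSError:
                return False

    for port in (3300, 6789):  # msgr2 and legacy msgr1 mon ports
        print(port, port_is_free("192.168.123.104", port))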
2026-03-09T13:32:16.957 INFO:teuthology.orchestra.run.vm04.stdout:Base mon IP(s) is [192.168.123.104:3300, 192.168.123.104:6789], mon addrv is [v2:192.168.123.104:3300,v1:192.168.123.104:6789] 2026-03-09T13:32:16.960 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.104 metric 100 2026-03-09T13:32:16.960 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.104 metric 100 2026-03-09T13:32:16.962 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-09T13:32:16.962 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium 2026-03-09T13:32:16.964 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-09T13:32:16.964 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-09T13:32:16.964 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-09T13:32:16.964 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000 2026-03-09T13:32:16.964 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:4/64 scope link noprefixroute 2026-03-09T13:32:16.964 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-09T13:32:16.965 INFO:teuthology.orchestra.run.vm04.stdout:Mon IP `192.168.123.104` is in CIDR network `192.168.123.0/24` 2026-03-09T13:32:16.965 INFO:teuthology.orchestra.run.vm04.stdout:Mon IP `192.168.123.104` is in CIDR network `192.168.123.0/24` 2026-03-09T13:32:16.965 INFO:teuthology.orchestra.run.vm04.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24'] 2026-03-09T13:32:16.965 INFO:teuthology.orchestra.run.vm04.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-09T13:32:16.965 INFO:teuthology.orchestra.run.vm04.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-09T13:32:18.393 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-09T13:32:18.393 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
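The public network above is inferred by testing which locally routed CIDR contains the mon IP, seemingly once per mon address, which is why 192.168.123.0/24 appears twice in the inferred list. The containment test itself reduces to the standard library's ipaddress module (a sketch):

    import ipaddress

    mon_ip = ipaddress.ip_address("192.168.123.104")
    # Network parsed from the 'ip route' output above.
    local_nets = [ipaddress.ip_network("192.168.123.0/24")]

    public_network = next((n for n in local_nets if mon_ip in n), None)
    print(public_network)  # 192.168.123.0/24
    # mon addrv string in the form logged above
    addrv = f"[v2:{mon_ip}:3300,v1:{mon_ip}:6789]"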
2026-03-09T13:32:18.393 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Getting image source signatures 2026-03-09T13:32:18.393 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7 2026-03-09T13:32:18.393 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92 2026-03-09T13:32:18.393 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-09T13:32:18.393 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Writing manifest to image destination 2026-03-09T13:32:18.523 INFO:teuthology.orchestra.run.vm04.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-09T13:32:18.523 INFO:teuthology.orchestra.run.vm04.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-09T13:32:18.523 INFO:teuthology.orchestra.run.vm04.stdout:Extracting ceph user uid/gid from container image... 2026-03-09T13:32:18.675 INFO:teuthology.orchestra.run.vm04.stdout:stat: stdout 167 167 2026-03-09T13:32:18.675 INFO:teuthology.orchestra.run.vm04.stdout:Creating initial keys... 2026-03-09T13:32:18.801 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-authtool: stdout AQDiy65pV+ttLBAAirY0/hajAh8Ghqol+00hXA== 2026-03-09T13:32:18.899 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-authtool: stdout AQDiy65p8tNWNBAAC8z8Esvd7XPINbPN4Y/Zgg== 2026-03-09T13:32:19.019 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-authtool: stdout AQDiy65pDfOJOhAA1Ok6HGQnD2L1VDtJjLJHHw== 2026-03-09T13:32:19.019 INFO:teuthology.orchestra.run.vm04.stdout:Creating initial monmap... 2026-03-09T13:32:19.142 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T13:32:19.142 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-09T13:32:19.142 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 2026-03-09T13:32:19.142 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T13:32:19.142 INFO:teuthology.orchestra.run.vm04.stdout:monmaptool for a [v2:192.168.123.104:3300,v1:192.168.123.104:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T13:32:19.142 INFO:teuthology.orchestra.run.vm04.stdout:setting min_mon_release = quincy 2026-03-09T13:32:19.142 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: set fsid to 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 2026-03-09T13:32:19.142 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T13:32:19.142 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:32:19.143 INFO:teuthology.orchestra.run.vm04.stdout:Creating mon... 2026-03-09T13:32:19.279 INFO:teuthology.orchestra.run.vm04.stdout:create mon.a on 2026-03-09T13:32:19.455 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target". 
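Just above, bootstrap created the initial keyrings with ceph-authtool and seeded the first monmap with monmaptool: epoch 0, a single monitor, pinned to the cluster fsid. The invocation is roughly the following (flag names from monmaptool's documented options; the exact command bootstrap runs inside the container may differ):

    import subprocess

    fsid = "2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20"
    addrv = "[v2:192.168.123.104:3300,v1:192.168.123.104:6789]"

    # Writes epoch 0 with one monitor 'a', as the output above shows.
    subprocess.run(
        ["monmaptool", "--create", "--clobber",
         "--fsid", fsid, "--addv", "a", addrv, "/tmp/monmap"],
        check=True,
    )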
2026-03-09T13:32:19.591 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-09T13:32:20.481 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20.target → /etc/systemd/system/ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20.target. 2026-03-09T13:32:20.481 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20.target → /etc/systemd/system/ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20.target. 2026-03-09T13:32:20.821 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mon.a 2026-03-09T13:32:20.821 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to reset failed state of unit ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mon.a.service: Unit ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mon.a.service not loaded. 2026-03-09T13:32:20.961 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20.target.wants/ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mon.a.service → /etc/systemd/system/ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@.service. 2026-03-09T13:32:21.134 INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present 2026-03-09T13:32:21.134 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T13:32:21.134 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mon to start... 2026-03-09T13:32:21.134 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mon... 
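The 'Waiting for mon' step above is a poll loop: bootstrap keeps invoking 'ceph status' through the freshly started container until the monitor answers, which is why a complete cluster status appears below the moment mon.a is up. A minimal sketch of such a loop (timeout and interval values are illustrative):

    import subprocess
    import time

    def wait_for_mon(timeout: float = 60.0, interval: float = 1.0) -> None:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            r = subprocess.run(["ceph", "status"],
                               capture_output=True, text=True)
            if r.returncode == 0:
                return  # the mon answered; its status output was printed
            time.sleep(interval)
        raise TimeoutError("mon did not come up")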
2026-03-09T13:32:21.363 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout cluster: 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout id: 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout services: 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.155012s) 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout data: 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout pgs: 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:mon is available 2026-03-09T13:32:21.364 INFO:teuthology.orchestra.run.vm04.stdout:Assimilating anything we can from ceph.conf... 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout fsid = 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.104:3300,v1:192.168.123.104:6789] 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mgr/cephadm/use_agent = True 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 
2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T13:32:21.579 INFO:teuthology.orchestra.run.vm04.stdout:Generating new minimal ceph.conf... 2026-03-09T13:32:21.772 INFO:teuthology.orchestra.run.vm04.stdout:Restarting the monitor... 2026-03-09T13:32:22.002 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:21 vm04 systemd[1]: Starting Ceph mon.a for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20... 2026-03-09T13:32:22.099 INFO:teuthology.orchestra.run.vm04.stdout:Setting public_network to 192.168.123.0/24 in mon config section 2026-03-09T13:32:22.272 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 podman[50149]: 2026-03-09 13:32:22.051206045 +0000 UTC m=+0.016990950 container create 82c193e3313360005f221cd2027e13cebc93695c43ad101a30c0f592c4a1f945 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mon-a, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0) 2026-03-09T13:32:22.272 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 podman[50149]: 2026-03-09 13:32:22.082181445 +0000 UTC m=+0.047966360 container init 82c193e3313360005f221cd2027e13cebc93695c43ad101a30c0f592c4a1f945 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mon-a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T13:32:22.272 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 podman[50149]: 2026-03-09 13:32:22.086276372 +0000 UTC m=+0.052061277 container start 82c193e3313360005f221cd2027e13cebc93695c43ad101a30c0f592c4a1f945 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mon-a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid) 2026-03-09T13:32:22.272 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 bash[50149]: 82c193e3313360005f221cd2027e13cebc93695c43ad101a30c0f592c4a1f945 2026-03-09T13:32:22.272 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 podman[50149]: 2026-03-09 13:32:22.044324803 +0000 UTC m=+0.010109708 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T13:32:22.272 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 systemd[1]: Started Ceph mon.a for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20. 2026-03-09T13:32:22.272 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: set uid:gid to 167:167 (ceph:ceph) 2026-03-09T13:32:22.272 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: pidfile_write: ignore empty --pid-file 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: load: jerasure load: lrc 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: RocksDB version: 7.9.2 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Git sha 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: DB SUMMARY 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: DB Session ID: QI5B0VGJTVTMEDVG37CY 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: CURRENT file: CURRENT 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: IDENTITY file: IDENTITY 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 75535 ; 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.error_if_exists: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.create_if_missing: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.paranoid_checks: 1 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T13:32:22.273 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.env: 0x55c40419cdc0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.fs: PosixFileSystem 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.info_log: 0x55c4056a6700 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_file_opening_threads: 16 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.statistics: (nil) 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.use_fsync: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_log_file_size: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.keep_log_file_num: 1000 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.recycle_log_file_num: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.allow_fallocate: 1 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.allow_mmap_reads: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.allow_mmap_writes: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.use_direct_reads: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.create_missing_column_families: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.db_log_dir: 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.wal_dir: 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.manifest_preallocation_size: 4194304 
2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.advise_random_on_open: 1 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.db_write_buffer_size: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.write_buffer_manager: 0x55c4056ab900 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.rate_limiter: (nil) 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.wal_recovery_mode: 2 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.enable_thread_tracking: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.enable_pipelined_write: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.unordered_write: 0 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.row_cache: None 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.wal_filter: None 2026-03-09T13:32:22.273 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.allow_ingest_behind: 0 2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.two_write_queues: 0 2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.manual_wal_flush: 0 2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.wal_compression: 0 2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 
vm04 ceph-mon[50165]: rocksdb: Options.atomic_flush: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.persist_stats_to_disk: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.write_dbid_to_manifest: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.log_readahead_size: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.best_efforts_recovery: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.allow_data_in_errors: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.db_host_id: __hostname__
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.enforce_single_del_contracts: true
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_background_jobs: 2
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_background_compactions: -1
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_subcompactions: 1
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.delayed_write_rate : 16777216
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_total_wal_size: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.stats_dump_period_sec: 600
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.stats_persist_period_sec: 600
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_open_files: -1
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.bytes_per_sync: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.wal_bytes_per_sync: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.strict_bytes_per_sync: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compaction_readahead_size: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_background_flushes: -1
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Compression algorithms supported:
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: kZSTD supported: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: kXpressCompression supported: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: kBZip2Compression supported: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: kLZ4Compression supported: 1
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: kZlibCompression supported: 1
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: kLZ4HCCompression supported: 1
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: kSnappyCompression supported: 1
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Fast CRC32 supported: Supported on x86
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: DMutex implementation: pthread_mutex_t
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.merge_operator:
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compaction_filter: None
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compaction_filter_factory: None
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.sst_partitioner_factory: None
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.memtable_factory: SkipListFactory
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.table_factory: BlockBasedTable
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4056a6640)
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout: cache_index_and_filter_blocks: 1
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout: pin_top_level_index_and_filter: 1
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout: index_type: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout: data_block_index_type: 0
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout: index_shortening: 1
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout: data_block_hash_table_util_ratio: 0.750000
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout: checksum: 4
2026-03-09T13:32:22.274 INFO:journalctl@ceph.mon.a.vm04.stdout: no_block_cache: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: block_cache: 0x55c4056cb350
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: block_cache_name: BinnedLRUCache
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: block_cache_options:
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: capacity : 536870912
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: num_shard_bits : 4
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: strict_capacity_limit : 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: high_pri_pool_ratio: 0.000
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: block_cache_compressed: (nil)
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: persistent_cache: (nil)
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: block_size: 4096
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: block_size_deviation: 10
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: block_restart_interval: 16
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: index_block_restart_interval: 1
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: metadata_block_size: 4096
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: partition_filters: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: use_delta_encoding: 1
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: filter_policy: bloomfilter
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: whole_key_filtering: 1
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: verify_compression: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: read_amp_bytes_per_bit: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: format_version: 5
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: enable_index_compression: 1
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: block_align: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: max_auto_readahead_size: 262144
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: prepopulate_block_cache: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: initial_auto_readahead_size: 8192
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout: num_file_reads_for_auto_readahead: 2
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.write_buffer_size: 33554432
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_write_buffer_number: 2
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compression: NoCompression
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.bottommost_compression: Disabled
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.prefix_extractor: nullptr
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.num_levels: 7
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compression_opts.window_bits: -14
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compression_opts.level: 32767
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compression_opts.strategy: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compression_opts.enabled: false
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.target_file_size_base: 67108864
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.target_file_size_multiplier: 1
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-09T13:32:22.275 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.arena_block_size: 1048576
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.disable_auto_compactions: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.inplace_update_support: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.inplace_update_num_locks: 10000
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.memtable_huge_page_size: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.bloom_locality: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.max_successive_merges: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.optimize_filters_for_hits: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.paranoid_file_checks: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.force_consistency_checks: 1
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.report_bg_io_stats: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.ttl: 2592000
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.periodic_compaction_seconds: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.enable_blob_files: false
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.min_blob_size: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.blob_file_size: 268435456
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.blob_compression_type: NoCompression
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.enable_blob_garbage_collection: false
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.blob_file_starting_level: 0
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5b5c20d2-62f0-470c-9b33-cc103d257294
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773063142113211, "job": 1, "event": "recovery_started", "wal_files": [9]}
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773063142118860, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 225, "table_properties": {"data_size": 70895, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9705, "raw_average_key_size": 49, "raw_value_size": 65374, "raw_average_value_size": 333, "num_data_blocks": 8, "num_entries": 196, "num_filter_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773063142, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b5c20d2-62f0-470c-9b33-cc103d257294", "db_session_id": "QI5B0VGJTVTMEDVG37CY", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773063142118917, "job": 1, "event": "recovery_finished"}
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c4056cce00
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: DB pointer 0x55c4057e2000
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout: ** DB Stats **
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout: Uptime(secs): 0.0 total, 0.0 interval
2026-03-09T13:32:22.276 INFO:journalctl@ceph.mon.a.vm04.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: ** Compaction Stats [default] **
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: L0 2/0 72.77 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 14.2 0.00 0.00 1 0.005 0 0 0.0 0.0
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Sum 2/0 72.77 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 14.2 0.00 0.00 1 0.005 0 0 0.0 0.0
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 14.2 0.00 0.00 1 0.005 0 0 0.0 0.0
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: ** Compaction Stats [default] **
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 14.2 0.00 0.00 1 0.005 0 0 0.0 0.0
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Uptime(secs): 0.0 total, 0.0 interval
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Flush(GB): cumulative 0.000, interval 0.000
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: AddFile(Total Files): cumulative 0, interval 0
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: AddFile(L0 Files): cumulative 0, interval 0
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: AddFile(Keys): cumulative 0, interval 0
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Cumulative compaction: 0.00 GB write, 4.77 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Interval compaction: 0.00 GB write, 4.77 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Block cache BinnedLRUCache@0x55c4056cb350#2 capacity: 512.00 MB usage: 1.06 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8e-06 secs_since: 0
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: Block cache entry stats(count,size,portion): FilterBlock(2,0.70 KB,0.00013411%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: ** File Read Latency Histogram By Level [default] **
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: starting mon.a rank 0 at public addrs [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] at bind addrs [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: mon.a@-1(???) e1 preinit fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: mon.a@-1(???).mds e0 Unable to load 'last_metadata'
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: mon.a@-1(???).mds e0 Unable to load 'last_metadata'
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: mon.a@-1(???).mds e1 new map
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: mon.a@-1(???).mds e1 print_map
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: e1
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: btime 2026-03-09T13:32:21:156384+0000
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: enable_multiple, ever_enabled_multiple: 1,1
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: legacy client fscid: -1
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout: No filesystems configured
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: mon.a@-1(???).mgr e0 loading version 1
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: mon.a@-1(???).mgr e1 active server: (0)
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: mon.a@-1(???).mgr e1 mkfs or daemon transitioned to available, loading commands
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: mon.a is new leader, mons a in quorum (ranks 0)
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: monmap epoch 1
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: last_changed 2026-03-09T13:32:19.093984+0000
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: created 2026-03-09T13:32:19.093984+0000
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: min_mon_release 19 (squid)
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: election_strategy: 1
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: fsmap
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: osdmap e1: 0 total, 0 up, 0 in
2026-03-09T13:32:22.277 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-mon[50165]: mgrmap e1: no daemons active
2026-03-09T13:32:22.296 INFO:teuthology.orchestra.run.vm04.stdout:Wrote config to /etc/ceph/ceph.conf
2026-03-09T13:32:22.298 INFO:teuthology.orchestra.run.vm04.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
2026-03-09T13:32:22.298 INFO:teuthology.orchestra.run.vm04.stdout:Creating mgr...
2026-03-09T13:32:22.298 INFO:teuthology.orchestra.run.vm04.stdout:Verifying port 0.0.0.0:9283 ...
2026-03-09T13:32:22.298 INFO:teuthology.orchestra.run.vm04.stdout:Verifying port 0.0.0.0:8765 ...
2026-03-09T13:32:22.443 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a
2026-03-09T13:32:22.443 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to reset failed state of unit ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a.service: Unit ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a.service not loaded.
2026-03-09T13:32:22.570 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20.target.wants/ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a.service → /etc/systemd/system/ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@.service.
2026-03-09T13:32:22.585 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:22 vm04 systemd[1]: Starting Ceph mgr.a for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20...
2026-03-09T13:32:22.739 INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present
2026-03-09T13:32:22.739 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to enable service . firewalld.service is not available
2026-03-09T13:32:22.739 INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present
2026-03-09T13:32:22.739 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available
2026-03-09T13:32:22.739 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr to start...
2026-03-09T13:32:22.739 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr...
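At this point cephadm has bootstrapped the mon, written the admin ceph.conf and keyring, and is deploying mgr.a; the non-zero `systemctl reset-failed` exit is expected on a first deployment (the unit does not exist yet), and the firewalld messages only mean no firewall service is installed in this VM. The "Waiting for mgr..." phase that follows polls `ceph status` until the mgr map reports an available daemon. A minimal sketch of that polling pattern, assuming only a `ceph` CLI on PATH (the function name, retry count, and delay here are illustrative, not cephadm's actual code):

    import json
    import subprocess
    import time

    def wait_for_mgr(retries=15, delay=2):
        # Poll cluster status until the mgr map reports an available daemon,
        # mirroring the "mgr not available, waiting (n/15)..." lines below.
        for attempt in range(1, retries + 1):
            out = subprocess.run(
                ["ceph", "status", "--format", "json"],
                capture_output=True, text=True, check=True,
            ).stdout
            if json.loads(out).get("mgrmap", {}).get("available"):
                return True
            print(f"mgr not available, waiting ({attempt}/{retries})...")
            time.sleep(delay)
        return False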
2026-03-09T13:32:22.859 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:22 vm04 podman[50362]: 2026-03-09 13:32:22.680538807 +0000 UTC m=+0.017081490 container create 7649b74b64f9c2b1461ae0da14b272b83c9fe83e8aade6c992bf8a9f4cee2a43 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-09T13:32:22.860 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:22 vm04 podman[50362]: 2026-03-09 13:32:22.717175424 +0000 UTC m=+0.053718117 container init 7649b74b64f9c2b1461ae0da14b272b83c9fe83e8aade6c992bf8a9f4cee2a43 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
2026-03-09T13:32:22.860 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:22 vm04 podman[50362]: 2026-03-09 13:32:22.71989211 +0000 UTC m=+0.056434794 container start 7649b74b64f9c2b1461ae0da14b272b83c9fe83e8aade6c992bf8a9f4cee2a43 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team )
2026-03-09T13:32:22.860 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:22 vm04 bash[50362]: 7649b74b64f9c2b1461ae0da14b272b83c9fe83e8aade6c992bf8a9f4cee2a43
2026-03-09T13:32:22.860 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:22 vm04 podman[50362]: 2026-03-09 13:32:22.673717686 +0000 UTC m=+0.010260390 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T13:32:22.860 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:22 vm04 systemd[1]: Started Ceph mgr.a for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20.
2026-03-09T13:32:22.860 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:22.823+0000 7f966113d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout {
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsid": "2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20",
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "health": {
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 0
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ],
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "a"
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ],
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_age": 0,
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:22.931 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T13:32:21:156384+0000",
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "restful"
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ],
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T13:32:21.156966+0000",
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }
2026-03-09T13:32:22.932 INFO:teuthology.orchestra.run.vm04.stdout:mgr not available, waiting (1/15)...
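Each line of the status dump above arrives wrapped in a teuthology prefix ("/usr/bin/ceph: stdout ..."), so the JSON has to be unwrapped before it can be parsed when working with the captured log. A hedged sketch of that cleanup (the helper name is hypothetical, and it assumes it is fed only the lines of a single dump):

    import json

    MARKER = "/usr/bin/ceph: stdout"

    def recover_json(log_lines):
        # Keep only the payload after the marker on each line, then re-parse
        # the reassembled JSON document.
        payload = [line.split(MARKER, 1)[1] for line in log_lines if MARKER in line]
        return json.loads("".join(payload))

For the dump above, recover_json(lines)["mgrmap"]["available"] comes back False, which is exactly why the wait loop keeps retrying.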
2026-03-09T13:32:23.140 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:22 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:22.872+0000 7f966113d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-09T13:32:23.620 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:23 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:23.289+0000 7f966113d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-09T13:32:23.620 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:23 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2362339455' entity='client.admin'
2026-03-09T13:32:23.620 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:23 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/368776226' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T13:32:23.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:23 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:23.619+0000 7f966113d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-09T13:32:23.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:23 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-09T13:32:23.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:23 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
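The repeated "mgr[py] Module <name> has missing NOTIFY_TYPES member" lines (continuing below) are emitted once per mgr module as the daemon loads its plugins and, like the NumPy sub-interpreter warning, are startup noise rather than failures; known-benign patterns like these are what a job's log ignore lists exist for. A small illustrative filter in that spirit (the pattern list is an example, not this job's actual ignorelist):

    import re

    BENIGN = [
        re.compile(r"has missing NOTIFY_TYPES member"),
        re.compile(r"NumPy was imported from a Python sub-interpreter"),
    ]

    def interesting(line):
        # Drop known-benign startup noise before scanning a log for real errors.
        return not any(p.search(line) for p in BENIGN)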
2026-03-09T13:32:23.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:23 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: from numpy import show_config as show_numpy_config
2026-03-09T13:32:23.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:23 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:23.709+0000 7f966113d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-09T13:32:23.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:23 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:23.746+0000 7f966113d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-09T13:32:23.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:23 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:23.820+0000 7f966113d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-09T13:32:24.583 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:24 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:24.322+0000 7f966113d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-09T13:32:24.583 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:24 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:24.431+0000 7f966113d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-09T13:32:24.583 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:24 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:24.470+0000 7f966113d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-09T13:32:24.583 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:24 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:24.505+0000 7f966113d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T13:32:24.583 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:24 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:24.546+0000 7f966113d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-09T13:32:24.890 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:24 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:24.582+0000 7f966113d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-09T13:32:24.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:24 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:24.745+0000 7f966113d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-09T13:32:24.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:24 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:24.793+0000 7f966113d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout {
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsid": "2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20",
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "health": {
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 0
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ],
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "a"
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ],
2026-03-09T13:32:25.155 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_age": 2,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T13:32:21:156384+0000",
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "restful"
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ],
2026-03-09T13:32:25.156 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T13:32:25.157 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:25.157 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-09T13:32:25.157 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T13:32:25.157 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T13:32:21.156966+0000",
2026-03-09T13:32:25.157 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T13:32:25.157 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:25.157 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-09T13:32:25.157 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }
2026-03-09T13:32:25.157 INFO:teuthology.orchestra.run.vm04.stdout:mgr not available, waiting (2/15)...
2026-03-09T13:32:25.342 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:25 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:25.030+0000 7f966113d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-09T13:32:25.343 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:25 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/1960054566' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T13:32:25.624 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:25 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:25.342+0000 7f966113d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T13:32:25.624 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:25 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:25.378+0000 7f966113d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-09T13:32:25.624 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:25 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:25.421+0000 7f966113d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T13:32:25.624 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:25 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:25.505+0000 7f966113d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T13:32:25.624 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:25 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:25.544+0000 7f966113d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-09T13:32:25.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:25 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:25.624+0000 7f966113d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T13:32:25.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:25 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:25.733+0000 7f966113d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-09T13:32:25.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:25 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:25.873+0000 7f966113d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-09T13:32:26.391 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:25 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:25.912+0000 7f966113d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-09T13:32:26.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:26 vm04 ceph-mon[50165]: Activating manager daemon a
2026-03-09T13:32:26.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:26 vm04 ceph-mon[50165]: mgrmap e2: a(active, starting, since 0.00960835s)
2026-03-09T13:32:26.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:26 vm04 ceph-mon[50165]: from='mgr.14100 192.168.123.104:0/3722383967' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T13:32:26.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:26 vm04 ceph-mon[50165]: from='mgr.14100 192.168.123.104:0/3722383967' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T13:32:26.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:26 vm04 ceph-mon[50165]: from='mgr.14100 192.168.123.104:0/3722383967' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T13:32:26.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:26 vm04 ceph-mon[50165]: from='mgr.14100 192.168.123.104:0/3722383967' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T13:32:26.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:26 vm04 ceph-mon[50165]: from='mgr.14100 192.168.123.104:0/3722383967' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-09T13:32:26.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:26 vm04 ceph-mon[50165]: Manager daemon a is now available
2026-03-09T13:32:26.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:26 vm04 ceph-mon[50165]: from='mgr.14100 192.168.123.104:0/3722383967' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-09T13:32:26.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:26 vm04 ceph-mon[50165]: from='mgr.14100 192.168.123.104:0/3722383967' entity='mgr.a'
2026-03-09T13:32:26.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:26 vm04 ceph-mon[50165]: from='mgr.14100 192.168.123.104:0/3722383967' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-09T13:32:26.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:26 vm04 ceph-mon[50165]: from='mgr.14100 192.168.123.104:0/3722383967' entity='mgr.a'
2026-03-09T13:32:26.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:26 vm04 ceph-mon[50165]: from='mgr.14100 192.168.123.104:0/3722383967' entity='mgr.a'
2026-03-09T13:32:27.426 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-09T13:32:27.426 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout {
2026-03-09T13:32:27.426 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsid": "2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20",
2026-03-09T13:32:27.426 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "health": {
2026-03-09T13:32:27.426 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-09T13:32:27.426 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-09T13:32:27.426 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-09T13:32:27.426 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:27.426 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 0
2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ],
2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "a"
2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ],
2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_age": 5,
2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T13:32:21:156384+0000", 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T13:32:27.427 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T13:32:21.156966+0000", 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-09T13:32:27.427 INFO:teuthology.orchestra.run.vm04.stdout:mgr is available 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout fsid = 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.104:3300,v1:192.168.123.104:6789] 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T13:32:27.665 INFO:teuthology.orchestra.run.vm04.stdout:Enabling cephadm module... 2026-03-09T13:32:28.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:27 vm04 ceph-mon[50165]: mgrmap e3: a(active, since 1.01511s) 2026-03-09T13:32:28.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:27 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/3269630985' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T13:32:28.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:27 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/3886733373' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T13:32:28.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:27 vm04 ceph-mon[50165]: from='client.? 
192.168.123.104:0/3886733373' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-09T13:32:28.904 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:28 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: ignoring --setuser ceph since I am not root 2026-03-09T13:32:28.905 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:28 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: ignoring --setgroup ceph since I am not root 2026-03-09T13:32:28.905 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:28 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:28.769+0000 7f1a19953140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T13:32:28.905 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:28 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:28.816+0000 7f1a19953140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T13:32:28.938 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-09T13:32:28.938 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 4, 2026-03-09T13:32:28.938 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T13:32:28.938 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "active_name": "a", 2026-03-09T13:32:28.938 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T13:32:28.938 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-09T13:32:28.938 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for the mgr to restart... 2026-03-09T13:32:28.938 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr epoch 4... 2026-03-09T13:32:29.254 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:28 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/543671356' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T13:32:29.254 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:28 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/543671356' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T13:32:29.254 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:28 vm04 ceph-mon[50165]: mgrmap e4: a(active, since 2s) 2026-03-09T13:32:29.254 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:28 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2181586937' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T13:32:29.578 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:29 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:29.253+0000 7f1a19953140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T13:32:29.578 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:29 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:29.578+0000 7f1a19953140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T13:32:29.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:29 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
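The entries above are cephadm's standard enable-and-wait cycle: bootstrap polls ceph status until the mgrmap reports an available active mgr, assimilates the minimal config shown earlier, enables the cephadm mgr module (which forces the active mgr to restart), and then watches ceph mgr stat for the mgrmap epoch to move past its pre-restart value. A minimal sketch of that loop in shell, assuming jq is present on the host (jq is not part of this job's package set, so it stands in here purely for illustration):

    # Wait for an active mgr, mirroring the "mgr not available,
    # waiting (N/15)..." messages in the output above.
    for i in $(seq 1 15); do
        ceph status --format json | jq -e '.mgrmap.available' >/dev/null && break
        echo "mgr not available, waiting ($i/15)..."
        sleep 2   # retry interval is an illustrative guess
    done

    # Enabling the module restarts the active mgr; the restart shows up
    # as a higher epoch in the mgr stat output.
    ceph mgr module enable cephadm
    ceph mgr stat   # -> {"epoch": ..., "available": true, "active_name": "a", ...}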
2026-03-09T13:32:29.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:29 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T13:32:29.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:29 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: from numpy import show_config as show_numpy_config 2026-03-09T13:32:29.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:29 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:29.661+0000 7f1a19953140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T13:32:29.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:29 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:29.700+0000 7f1a19953140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T13:32:29.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:29 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:29.770+0000 7f1a19953140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T13:32:30.525 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:30 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:30.262+0000 7f1a19953140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T13:32:30.525 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:30 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:30.372+0000 7f1a19953140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T13:32:30.525 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:30 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:30.410+0000 7f1a19953140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T13:32:30.525 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:30 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:30.446+0000 7f1a19953140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T13:32:30.525 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:30 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:30.488+0000 7f1a19953140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T13:32:30.525 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:30 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:30.524+0000 7f1a19953140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T13:32:30.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:30 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:30.697+0000 7f1a19953140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T13:32:30.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:30 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:30.747+0000 7f1a19953140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T13:32:31.254 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:30 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:30.975+0000 7f1a19953140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T13:32:31.254 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:31.253+0000 
7f1a19953140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T13:32:31.528 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:31.289+0000 7f1a19953140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T13:32:31.529 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:31.331+0000 7f1a19953140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T13:32:31.529 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:31.411+0000 7f1a19953140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T13:32:31.529 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:31.449+0000 7f1a19953140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T13:32:31.529 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:31.528+0000 7f1a19953140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T13:32:31.810 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:31.638+0000 7f1a19953140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T13:32:31.811 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:31.774+0000 7f1a19953140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T13:32:31.811 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:31.810+0000 7f1a19953140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: Active manager daemon a restarted 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: Activating manager daemon a 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: osdmap e2: 0 total, 0 up, 0 in 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: mgrmap e5: a(active, starting, since 0.0047587s) 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' 
entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: Manager daemon a is now available 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T13:32:32.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:31 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T13:32:32.857 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-09T13:32:32.857 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 6, 2026-03-09T13:32:32.857 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T13:32:32.857 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-09T13:32:32.857 INFO:teuthology.orchestra.run.vm04.stdout:mgr epoch 4 is available 2026-03-09T13:32:32.857 INFO:teuthology.orchestra.run.vm04.stdout:Setting orchestrator backend to cephadm... 2026-03-09T13:32:33.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:32 vm04 ceph-mon[50165]: Found migration_current of "None". Setting to last migration. 2026-03-09T13:32:33.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:32 vm04 ceph-mon[50165]: mgrmap e6: a(active, since 1.00769s) 2026-03-09T13:32:33.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-09T13:32:33.457 INFO:teuthology.orchestra.run.vm04.stdout:Generating ssh key... 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: Generating public/private ed25519 key pair. 
2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: Your identification has been saved in /tmp/tmpaxo0wj4n/key 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: Your public key has been saved in /tmp/tmpaxo0wj4n/key.pub 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: The key fingerprint is: 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: SHA256:UOxW965/KIo8rmTIcQUbAfIHBn9gxggt+W94d7HwLBQ ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: The key's randomart image is: 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: +--[ED25519 256]--+ 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: |.+o+B.+o. | 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: |o oB.oE=. . . | 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: | o o =o.. . . | 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: | . ooo+ . | 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: | o...=So . | 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: | ..++o = . | 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: | oo.oo . . | 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: | o ... o . .| 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: | .o+... o.. 
| 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: +----[SHA256]-----+ 2026-03-09T13:32:33.959 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-mon[50165]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T13:32:33.960 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-mon[50165]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T13:32:33.960 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' 2026-03-09T13:32:33.960 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' 2026-03-09T13:32:33.960 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-mon[50165]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:32:33.960 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' 2026-03-09T13:32:33.960 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:32:33.960 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-mon[50165]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:32:33.960 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-mon[50165]: [09/Mar/2026:13:32:33] ENGINE Bus STARTING 2026-03-09T13:32:33.960 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-mon[50165]: [09/Mar/2026:13:32:33] ENGINE Serving on http://192.168.123.104:8765 2026-03-09T13:32:33.960 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-mon[50165]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:32:33.960 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-mon[50165]: Generating ssh key... 2026-03-09T13:32:33.960 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' 2026-03-09T13:32:33.960 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' 2026-03-09T13:32:33.960 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:33 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:32:33.997 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBw+czmzSxdyAs4J8mLRc2wOx78CkH3krni2d+0YdVlp ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 2026-03-09T13:32:33.997 INFO:teuthology.orchestra.run.vm04.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-09T13:32:33.997 INFO:teuthology.orchestra.run.vm04.stdout:Adding key to root@localhost authorized_keys... 2026-03-09T13:32:33.998 INFO:teuthology.orchestra.run.vm04.stdout:Adding host vm04... 
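With the backend set to cephadm, the orchestrator needs SSH access as root to every host it manages. The sequence above has the mgr generate a cluster-wide ed25519 keypair, exports the public half, and authorizes it for root; roughly, in shell (paths and user match the ones in this run):

    # Connection user, dispatched above as "cephadm set-user"
    ceph cephadm set-user root

    # Generate the cluster SSH key inside the mgr, then export it
    ceph cephadm generate-key
    ceph cephadm get-pub-key > /home/ubuntu/cephtest/ceph.pub

    # Authorize the key for root on the host being managed
    sudo install -d -m 0700 /root/.ssh
    sudo tee -a /root/.ssh/authorized_keys < /home/ubuntu/cephtest/ceph.pub
    sudo chmod 0600 /root/.ssh/authorized_keys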
2026-03-09T13:32:35.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:34 vm04 ceph-mon[50165]: [09/Mar/2026:13:32:33] ENGINE Serving on https://192.168.123.104:7150 2026-03-09T13:32:35.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:34 vm04 ceph-mon[50165]: [09/Mar/2026:13:32:33] ENGINE Bus STARTED 2026-03-09T13:32:35.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:34 vm04 ceph-mon[50165]: [09/Mar/2026:13:32:33] ENGINE Client ('192.168.123.104', 59028) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T13:32:35.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:34 vm04 ceph-mon[50165]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:32:35.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:34 vm04 ceph-mon[50165]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "addr": "192.168.123.104", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:32:35.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:34 vm04 ceph-mon[50165]: Deploying cephadm binary to vm04 2026-03-09T13:32:35.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:34 vm04 ceph-mon[50165]: mgrmap e7: a(active, since 2s) 2026-03-09T13:32:35.787 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Added host 'vm04' with addr '192.168.123.104' 2026-03-09T13:32:35.787 INFO:teuthology.orchestra.run.vm04.stdout:Deploying unmanaged mon service... 2026-03-09T13:32:36.099 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-09T13:32:36.099 INFO:teuthology.orchestra.run.vm04.stdout:Deploying unmanaged mgr service... 2026-03-09T13:32:36.380 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 
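Host registration and the initial service specs follow. Both the mon and mgr specs are stored as unmanaged, so the orchestrator records the bootstrap daemons without scheduling any placement changes of its own. The equivalent CLI calls, taken from the dispatches logged around this point:

    # Register the host with the orchestrator by name and address
    ceph orch host add vm04 192.168.123.104

    # Store mon and mgr specs but mark them unmanaged: cephadm will not
    # create, move, or remove these daemons until they are managed again
    ceph orch apply mon --unmanaged
    ceph orch apply mgr --unmanaged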
2026-03-09T13:32:36.882 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:36 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' 2026-03-09T13:32:36.883 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:36 vm04 ceph-mon[50165]: Added host vm04 2026-03-09T13:32:36.883 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:36 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:32:36.883 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:36 vm04 ceph-mon[50165]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:32:36.883 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:36 vm04 ceph-mon[50165]: Saving service mon spec with placement count:5 2026-03-09T13:32:36.883 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:36 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' 2026-03-09T13:32:36.883 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:36 vm04 ceph-mon[50165]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:32:36.883 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:36 vm04 ceph-mon[50165]: Saving service mgr spec with placement count:2 2026-03-09T13:32:36.883 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:36 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' 2026-03-09T13:32:36.883 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:36 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2551947837' entity='client.admin' 2026-03-09T13:32:36.926 INFO:teuthology.orchestra.run.vm04.stdout:Enabling the dashboard module... 2026-03-09T13:32:38.059 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:37 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/3449055261' entity='client.admin' 2026-03-09T13:32:38.059 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:37 vm04 ceph-mon[50165]: from='client.? 
192.168.123.104:0/4227320709' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T13:32:38.059 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:37 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' 2026-03-09T13:32:38.059 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:37 vm04 ceph-mon[50165]: from='mgr.14118 192.168.123.104:0/2755573318' entity='mgr.a' 2026-03-09T13:32:38.059 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:37 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: ignoring --setuser ceph since I am not root 2026-03-09T13:32:38.059 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:37 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: ignoring --setgroup ceph since I am not root 2026-03-09T13:32:38.260 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-09T13:32:38.261 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 8, 2026-03-09T13:32:38.261 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T13:32:38.261 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "active_name": "a", 2026-03-09T13:32:38.261 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T13:32:38.261 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-09T13:32:38.261 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for the mgr to restart... 2026-03-09T13:32:38.261 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr epoch 8... 2026-03-09T13:32:38.323 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:38 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:38.073+0000 7f7e0e668140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T13:32:38.323 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:38 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:38.118+0000 7f7e0e668140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T13:32:38.562 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:38 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:38.562+0000 7f7e0e668140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T13:32:39.141 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:38 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:38.886+0000 7f7e0e668140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T13:32:39.141 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:38 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T13:32:39.141 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:38 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T13:32:39.141 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:38 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: from numpy import show_config as show_numpy_config 2026-03-09T13:32:39.141 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:38 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:38.972+0000 7f7e0e668140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T13:32:39.141 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:39 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:39.008+0000 7f7e0e668140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T13:32:39.141 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:39 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:39.075+0000 7f7e0e668140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T13:32:39.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:38 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/4227320709' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T13:32:39.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:38 vm04 ceph-mon[50165]: mgrmap e8: a(active, since 6s) 2026-03-09T13:32:39.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:38 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/3176313413' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T13:32:39.818 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:39 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:39.561+0000 7f7e0e668140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T13:32:39.818 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:39 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:39.669+0000 7f7e0e668140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T13:32:39.818 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:39 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:39.709+0000 7f7e0e668140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T13:32:39.819 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:39 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:39.743+0000 7f7e0e668140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T13:32:39.819 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:39 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:39.782+0000 7f7e0e668140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T13:32:40.141 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:39 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:39.818+0000 7f7e0e668140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T13:32:40.141 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:39 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:39.998+0000 7f7e0e668140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T13:32:40.141 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:40 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:40.051+0000 7f7e0e668140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T13:32:40.557 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:40 vm04 
ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:40.287+0000 7f7e0e668140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T13:32:40.821 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:40 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:40.557+0000 7f7e0e668140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T13:32:40.821 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:40 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:40.592+0000 7f7e0e668140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T13:32:40.821 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:40 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:40.635+0000 7f7e0e668140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T13:32:40.821 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:40 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:40.709+0000 7f7e0e668140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T13:32:40.821 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:40 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:40.744+0000 7f7e0e668140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T13:32:41.103 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:40 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:40.820+0000 7f7e0e668140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T13:32:41.103 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:40 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:40.938+0000 7f7e0e668140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T13:32:41.103 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:41 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:41.067+0000 7f7e0e668140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T13:32:41.103 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:32:41 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[50372]: 2026-03-09T13:32:41.102+0000 7f7e0e668140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T13:32:41.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:41 vm04 ceph-mon[50165]: Active manager daemon a restarted 2026-03-09T13:32:41.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:41 vm04 ceph-mon[50165]: Activating manager daemon a 2026-03-09T13:32:41.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:41 vm04 ceph-mon[50165]: osdmap e3: 0 total, 0 up, 0 in 2026-03-09T13:32:41.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:41 vm04 ceph-mon[50165]: mgrmap e9: a(active, starting, since 0.232139s) 2026-03-09T13:32:41.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:41 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:32:41.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:41 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T13:32:41.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:41 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T13:32:41.641 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:41 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T13:32:41.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:41 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T13:32:41.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:41 vm04 ceph-mon[50165]: Manager daemon a is now available 2026-03-09T13:32:41.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:41 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:32:41.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:41 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:32:42.424 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-09T13:32:42.424 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 10, 2026-03-09T13:32:42.424 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T13:32:42.424 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-09T13:32:42.424 INFO:teuthology.orchestra.run.vm04.stdout:mgr epoch 8 is available 2026-03-09T13:32:42.424 INFO:teuthology.orchestra.run.vm04.stdout:Generating a dashboard self-signed certificate... 2026-03-09T13:32:42.699 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:42 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T13:32:42.699 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:42 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T13:32:42.699 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:42 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:32:42.699 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:42 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T13:32:42.699 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:42 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:32:42.699 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:42 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.agent.vm04", "caps": []}]: dispatch 2026-03-09T13:32:42.699 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:42 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "client.agent.vm04", "caps": []}]': finished 2026-03-09T13:32:42.782 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-09T13:32:42.782 INFO:teuthology.orchestra.run.vm04.stdout:Creating initial admin user... 
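Dashboard bring-up repeats the enable-and-wait cycle, then issues a self-signed certificate and seeds the first account. A sketch of the steps the next few entries report, where admin.pass is a hypothetical scratch file for the generated password (the real invocation passes it via the dashboard ac-user-create dispatch shown below):

    ceph mgr module enable dashboard
    ceph mgr stat                          # wait for the post-restart epoch

    ceph dashboard create-self-signed-cert

    # Seed the initial administrator; the password file is illustrative
    echo -n 'rqsu0tnmzx' > admin.pass      # password reported further down
    ceph dashboard ac-user-create admin -i admin.pass administrator

    # Which TLS port the dashboard bound (8443 in this run)
    ceph config get mgr mgr/dashboard/ssl_server_port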
2026-03-09T13:32:43.202 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$mySyz8fqHW4MUAJcB.jR3eVudzUtFTGtda2WqX.YmSjvnT7kT.5bG", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773063163, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-09T13:32:43.202 INFO:teuthology.orchestra.run.vm04.stdout:Fetching dashboard port number... 2026-03-09T13:32:43.423 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:43 vm04 ceph-mon[50165]: Deploying daemon agent.vm04 on vm04 2026-03-09T13:32:43.423 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:43 vm04 ceph-mon[50165]: [09/Mar/2026:13:32:42] ENGINE Bus STARTING 2026-03-09T13:32:43.424 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:43 vm04 ceph-mon[50165]: [09/Mar/2026:13:32:42] ENGINE Serving on https://192.168.123.104:7150 2026-03-09T13:32:43.424 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:43 vm04 ceph-mon[50165]: [09/Mar/2026:13:32:42] ENGINE Client ('192.168.123.104', 39748) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T13:32:43.424 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:43 vm04 ceph-mon[50165]: mgrmap e10: a(active, since 1.2838s) 2026-03-09T13:32:43.424 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:43 vm04 ceph-mon[50165]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T13:32:43.424 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:43 vm04 ceph-mon[50165]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T13:32:43.424 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:43 vm04 ceph-mon[50165]: [09/Mar/2026:13:32:42] ENGINE Serving on http://192.168.123.104:8765 2026-03-09T13:32:43.424 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:43 vm04 ceph-mon[50165]: [09/Mar/2026:13:32:42] ENGINE Bus STARTED 2026-03-09T13:32:43.424 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:43 vm04 ceph-mon[50165]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:32:43.424 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:43 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:32:43.424 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:43 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:32:43.424 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:43 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:32:43.475 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 8443 2026-03-09T13:32:43.475 INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present 2026-03-09T13:32:43.475 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to open ports <[8443]>. 
firewalld.service is not available 2026-03-09T13:32:43.477 INFO:teuthology.orchestra.run.vm04.stdout:Ceph Dashboard is now available at: 2026-03-09T13:32:43.477 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:32:43.477 INFO:teuthology.orchestra.run.vm04.stdout: URL: https://vm04.local:8443/ 2026-03-09T13:32:43.477 INFO:teuthology.orchestra.run.vm04.stdout: User: admin 2026-03-09T13:32:43.477 INFO:teuthology.orchestra.run.vm04.stdout: Password: rqsu0tnmzx 2026-03-09T13:32:43.477 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:32:43.477 INFO:teuthology.orchestra.run.vm04.stdout:Saving cluster configuration to /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/config directory 2026-03-09T13:32:43.923 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-09T13:32:43.923 INFO:teuthology.orchestra.run.vm04.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-09T13:32:43.923 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:32:43.923 INFO:teuthology.orchestra.run.vm04.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-09T13:32:43.923 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:32:43.923 INFO:teuthology.orchestra.run.vm04.stdout:Or, if you are only running a single cluster on this host: 2026-03-09T13:32:43.923 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:32:43.923 INFO:teuthology.orchestra.run.vm04.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-09T13:32:43.923 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:32:43.924 INFO:teuthology.orchestra.run.vm04.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-09T13:32:43.924 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:32:43.924 INFO:teuthology.orchestra.run.vm04.stdout: ceph telemetry on 2026-03-09T13:32:43.924 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:32:43.924 INFO:teuthology.orchestra.run.vm04.stdout:For more information see: 2026-03-09T13:32:43.924 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:32:43.924 INFO:teuthology.orchestra.run.vm04.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-09T13:32:43.924 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:32:43.924 INFO:teuthology.orchestra.run.vm04.stdout:Bootstrap complete. 2026-03-09T13:32:43.962 INFO:tasks.cephadm:Fetching config... 2026-03-09T13:32:43.962 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T13:32:43.962 DEBUG:teuthology.orchestra.run.vm04:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-09T13:32:43.996 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-09T13:32:43.996 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T13:32:43.996 DEBUG:teuthology.orchestra.run.vm04:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-09T13:32:44.054 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-09T13:32:44.054 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T13:32:44.054 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/keyring of=/dev/stdout 2026-03-09T13:32:44.248 INFO:tasks.cephadm:Fetching pub ssh key... 
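After "Bootstrap complete." the task collects the cluster artifacts it needs on the test node (ceph.conf, the client.admin and mon keyrings, the orchestrator's public key) and then drives follow-up configuration through the packaged cephadm binary. The fully qualified shell form it uses, copied from the next commands in the log, pins the image, fsid, config, and keyring explicitly:

    # Single-cluster shortcut, as suggested by the bootstrap output
    sudo /home/ubuntu/cephtest/cephadm shell

    # Fully qualified form used by the task for one-shot commands
    sudo /home/ubuntu/cephtest/cephadm \
        --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- \
        ceph config set mgr mgr/cephadm/allow_ptrace true

    # Same wrapper, distributing the admin keyring to all hosts world-readable
    sudo /home/ubuntu/cephtest/cephadm \
        --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- \
        ceph orch client-keyring set client.admin '*' --mode 0755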
2026-03-09T13:32:44.248 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T13:32:44.248 DEBUG:teuthology.orchestra.run.vm04:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout
2026-03-09T13:32:44.310 INFO:tasks.cephadm:Installing pub ssh key for root users...
2026-03-09T13:32:44.310 DEBUG:teuthology.orchestra.run.vm04:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBw+czmzSxdyAs4J8mLRc2wOx78CkH3krni2d+0YdVlp ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-09T13:32:44.399 INFO:teuthology.orchestra.run.vm04.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBw+czmzSxdyAs4J8mLRc2wOx78CkH3krni2d+0YdVlp ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20
2026-03-09T13:32:44.422 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph config set mgr mgr/cephadm/allow_ptrace true
2026-03-09T13:32:44.642 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:44 vm04 ceph-mon[50165]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T13:32:44.642 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:44 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/1830662534' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-09T13:32:44.642 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:44 vm04 ceph-mon[50165]: mgrmap e11: a(active, since 2s)
2026-03-09T13:32:44.642 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:44 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/3109939310' entity='client.admin'
2026-03-09T13:32:44.642 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:44 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:44.734 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:32:45.136 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755
2026-03-09T13:32:45.136 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph orch client-keyring set client.admin '*' --mode 0755
2026-03-09T13:32:45.435 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:32:45.474 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:45 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:45.474 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:45 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:45.474 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:45 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:32:45.474 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:45 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:45.474 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:45 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:45.475 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:45 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:45.475 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:45 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:45.475 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:45 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:45.475 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:45 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:32:45.475 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:45 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:45.475 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:45 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:45.475 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:45 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/3020473954' entity='client.admin'
2026-03-09T13:32:45.475 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:45 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:32:45.475 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:45 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:45.878 INFO:tasks.cephadm:Setting crush tunables to default
2026-03-09T13:32:45.878 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph osd crush tunables default
2026-03-09T13:32:46.166 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:32:46.640 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:46 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:46.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:46 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:46.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:46 vm04 ceph-mon[50165]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T13:32:46.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:46 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:46.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:46 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:32:46.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:46 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:32:46.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:46 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T13:32:46.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:46 vm04 ceph-mon[50165]: Updating vm04:/etc/ceph/ceph.conf
2026-03-09T13:32:46.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:46 vm04 ceph-mon[50165]: Updating vm04:/var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/config/ceph.conf
2026-03-09T13:32:46.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:46 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:46.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:46 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:46.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:46 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:47.425 INFO:teuthology.orchestra.run.vm04.stderr:adjusted tunables profile to default
2026-03-09T13:32:47.489 INFO:tasks.cephadm:Adding mon.a on vm04
2026-03-09T13:32:47.489 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph orch apply mon '1;vm04:192.168.123.104=a'
2026-03-09T13:32:47.668 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:32:47.692 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:47 vm04 ceph-mon[50165]: Updating vm04:/etc/ceph/ceph.client.admin.keyring
2026-03-09T13:32:47.693 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:47 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/3545517854' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-09T13:32:47.693 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:47 vm04 ceph-mon[50165]: Updating vm04:/var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/config/ceph.client.admin.keyring
2026-03-09T13:32:47.693 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:47 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:47.693 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:47 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:47.693 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:47 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:47.943 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled mon update...
2026-03-09T13:32:48.015 INFO:tasks.cephadm:Waiting for 1 mons in monmap...
2026-03-09T13:32:48.015 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph mon dump -f json
2026-03-09T13:32:48.282 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:32:48.559 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/3545517854' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-09T13:32:48.559 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: osdmap e4: 0 total, 0 up, 0 in
2026-03-09T13:32:48.559 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: mgrmap e12: a(active, since 6s)
2026-03-09T13:32:48.559 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "1;vm04:192.168.123.104=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T13:32:48.559 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: Saving service mon spec with placement vm04:192.168.123.104=a;count:1
2026-03-09T13:32:48.559 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:48.559 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:32:48.559 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:32:48.560 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T13:32:48.560 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:48.560 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:48.560 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: Reconfiguring mon.a (unknown last config time)...
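The placement string '1;vm04:192.168.123.104=a' in the `ceph orch apply mon` call above packs a daemon count together with a pinned host:ip=name entry, which is why the mon log then reports "Saving service mon spec with placement vm04:192.168.123.104=a;count:1". One way to confirm what the scheduler did with such a spec (a sketch; run from any cephadm shell on the cluster):

    # show the stored mon service spec and the daemons placed under it
    ceph orch ls mon --export
    ceph orch ps --daemon-type mon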
2026-03-09T13:32:48.560 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T13:32:48.560 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T13:32:48.560 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:32:48.560 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: Reconfiguring daemon mon.a on vm04
2026-03-09T13:32:48.560 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:48.560 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:48 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:48.560 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:32:48.560 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":1,"fsid":"2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20","modified":"2026-03-09T13:32:19.093984Z","created":"2026-03-09T13:32:19.093984Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3300","nonce":0},{"type":"v1","addr":"192.168.123.104:6789","nonce":0}]},"addr":"192.168.123.104:6789/0","public_addr":"192.168.123.104:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-09T13:32:48.560 INFO:teuthology.orchestra.run.vm04.stderr:dumped monmap epoch 1
2026-03-09T13:32:48.637 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-09T13:32:48.637 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph config generate-minimal-conf
2026-03-09T13:32:48.803 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:32:49.127 INFO:teuthology.orchestra.run.vm04.stdout:# minimal ceph.conf for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20
2026-03-09T13:32:49.127 INFO:teuthology.orchestra.run.vm04.stdout:[global]
2026-03-09T13:32:49.127 INFO:teuthology.orchestra.run.vm04.stdout: fsid = 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20
2026-03-09T13:32:49.127 INFO:teuthology.orchestra.run.vm04.stdout: mon_host = [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0]
2026-03-09T13:32:49.189 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring...
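The earlier "Waiting for 1 mons in monmap..." step is satisfied by the monmap JSON just dumped: `mons` has one element and `quorum` is `[0]`. The wait amounts to polling `ceph mon dump -f json` and counting entries; a hand-rolled sketch of the same loop, assuming jq is available on the node:

    # block until the monmap lists the expected number of mons
    want=1
    until [ "$(ceph mon dump -f json 2>/dev/null | jq '.mons | length')" -ge "$want" ]; do
        sleep 5
    done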
2026-03-09T13:32:49.190 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T13:32:49.190 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.conf
2026-03-09T13:32:49.218 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T13:32:49.218 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-09T13:32:49.285 INFO:tasks.cephadm:Adding mgr.a on vm04
2026-03-09T13:32:49.285 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph orch apply mgr '1;vm04=a'
2026-03-09T13:32:49.425 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:49 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/4122368266' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T13:32:49.425 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:49 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/3975031603' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:32:49.522 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:32:49.854 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled mgr update...
2026-03-09T13:32:49.926 INFO:tasks.cephadm:Deploying OSDs...
2026-03-09T13:32:49.926 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T13:32:49.926 DEBUG:teuthology.orchestra.run.vm04:> dd if=/scratch_devs of=/dev/stdout
2026-03-09T13:32:49.946 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T13:32:49.946 DEBUG:teuthology.orchestra.run.vm04:> ls /dev/[sv]d?
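`dd if=/scratch_devs` exiting 1 means this VPS image ships no pre-computed scratch-device list, so the harness falls back to globbing /dev/[sv]d? and then discarding the root disk, as the next lines show. A rough equivalent of that fallback (a sketch; the root-device detection here is simplified to "the disk backing /", which would need adjusting for LVM or multipath roots):

    # list candidate disks and drop the one backing the root filesystem
    root_disk="/dev/$(lsblk -no pkname "$(findmnt -no SOURCE /)")"
    for dev in /dev/[sv]d?; do
        [ "$dev" = "$root_disk" ] || echo "$dev"
    done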
2026-03-09T13:32:50.005 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vda
2026-03-09T13:32:50.005 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdb
2026-03-09T13:32:50.005 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdc
2026-03-09T13:32:50.005 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdd
2026-03-09T13:32:50.005 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vde
2026-03-09T13:32:50.005 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-09T13:32:50.005 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-09T13:32:50.005 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdb
2026-03-09T13:32:50.071 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdb
2026-03-09T13:32:50.071 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-09T13:32:50.071 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 221 Links: 1 Device type: fc,10
2026-03-09T13:32:50.072 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T13:32:50.072 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-09T13:32:50.072 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 13:32:46.018322050 +0000
2026-03-09T13:32:50.072 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 13:29:43.402339258 +0000
2026-03-09T13:32:50.072 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 13:29:43.402339258 +0000
2026-03-09T13:32:50.072 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-09 13:27:37.238000000 +0000
2026-03-09T13:32:50.072 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-09T13:32:50.143 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in
2026-03-09T13:32:50.143 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out
2026-03-09T13:32:50.143 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000167083 s, 3.1 MB/s
2026-03-09T13:32:50.144 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-09T13:32:50.214 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdc
2026-03-09T13:32:50.275 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdc
2026-03-09T13:32:50.275 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-09T13:32:50.275 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 224 Links: 1 Device type: fc,20
2026-03-09T13:32:50.275 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T13:32:50.275 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-09T13:32:50.275 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 13:32:46.023322057 +0000
2026-03-09T13:32:50.275 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 13:29:43.334339191 +0000
2026-03-09T13:32:50.275 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 13:29:43.334339191 +0000
2026-03-09T13:32:50.275 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-09 13:27:37.250000000 +0000
2026-03-09T13:32:50.275 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-09T13:32:50.341 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in
2026-03-09T13:32:50.341 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out
2026-03-09T13:32:50.341 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000241071 s, 2.1 MB/s
2026-03-09T13:32:50.342 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-09T13:32:50.399 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdd
2026-03-09T13:32:50.456 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdd
2026-03-09T13:32:50.456 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-09T13:32:50.456 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-09T13:32:50.457 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T13:32:50.457 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-09T13:32:50.457 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 13:32:46.028322064 +0000
2026-03-09T13:32:50.457 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 13:29:43.359339216 +0000
2026-03-09T13:32:50.457 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 13:29:43.359339216 +0000
2026-03-09T13:32:50.457 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-09 13:27:37.267000000 +0000
2026-03-09T13:32:50.457 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-09T13:32:50.519 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in
2026-03-09T13:32:50.520 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out
2026-03-09T13:32:50.520 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000153116 s, 3.3 MB/s
2026-03-09T13:32:50.521 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-09T13:32:50.579 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vde
2026-03-09T13:32:50.641 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vde
2026-03-09T13:32:50.641 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-09T13:32:50.641 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-09T13:32:50.641 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T13:32:50.641 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-09T13:32:50.642 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 13:32:46.032322070 +0000
2026-03-09T13:32:50.642 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 13:29:43.420339276 +0000
2026-03-09T13:32:50.642 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 13:29:43.420339276 +0000
2026-03-09T13:32:50.642 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-09 13:27:37.270000000 +0000
2026-03-09T13:32:50.642 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-09T13:32:50.705 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in
2026-03-09T13:32:50.705 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out
2026-03-09T13:32:50.705 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.0001977 s, 2.6 MB/s
2026-03-09T13:32:50.706 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-09T13:32:50.765 INFO:tasks.cephadm:Deploying osd.0 on vm04 with /dev/vde...
2026-03-09T13:32:50.765 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- lvm zap /dev/vde
2026-03-09T13:32:50.976 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:32:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:50 vm04 ceph-mon[50165]: from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "1;vm04=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T13:32:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:50 vm04 ceph-mon[50165]: Saving service mgr spec with placement vm04=a;count:1
2026-03-09T13:32:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:50 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:50 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:32:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:50 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:32:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:50 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T13:32:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:50 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
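Each OSD deploy in this section is two steps: wipe the candidate device with ceph-volume, then hand it to the orchestrator, which creates and starts the daemon. Reduced to its essentials for this run's first device (image, fsid, host, and device taken from the log; the `./cephadm` path is illustrative, the harness uses /home/ubuntu/cephtest/cephadm):

    # step 1: clear any previous LVM/partition state on the device
    sudo ./cephadm ceph-volume --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- lvm zap /dev/vde
    # step 2: let the orchestrator create and start an OSD on it
    sudo ./cephadm shell -- ceph orch daemon add osd vm04:/dev/vde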
2026-03-09T13:32:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:50 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:50 vm04 ceph-mon[50165]: Reconfiguring mgr.a (unknown last config time)...
2026-03-09T13:32:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:50 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T13:32:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:50 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T13:32:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:50 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:32:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:50 vm04 ceph-mon[50165]: Reconfiguring daemon mgr.a on vm04
2026-03-09T13:32:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:50 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:50 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:51.837 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:32:51.859 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph orch daemon add osd vm04:/dev/vde
2026-03-09T13:32:52.032 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:32:52.328 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:52 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T13:32:52.329 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:52 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T13:32:52.329 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:52 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:32:53.500 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:53 vm04 ceph-mon[50165]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T13:32:53.501 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:53 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2726686584' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "02b3e414-4f53-4659-8c7c-db2435785cbf"}]: dispatch
2026-03-09T13:32:53.501 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:53 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2726686584' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "02b3e414-4f53-4659-8c7c-db2435785cbf"}]': finished
2026-03-09T13:32:53.501 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:53 vm04 ceph-mon[50165]: osdmap e5: 1 total, 0 up, 1 in
2026-03-09T13:32:53.501 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:53 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T13:32:54.640 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:54 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/4091774492' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T13:32:57.441 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:57 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-09T13:32:57.441 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:57 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:32:58.642 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:58 vm04 ceph-mon[50165]: Deploying daemon osd.0 on vm04
2026-03-09T13:32:59.463 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:59 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:32:59.463 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:59 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:59.463 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:59 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:32:59.463 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:59 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T13:32:59.463 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:59 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:32:59.463 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:32:59 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:00.374 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 0 on host 'vm04'
2026-03-09T13:33:00.442 DEBUG:teuthology.orchestra.run.vm04:osd.0> sudo journalctl -f -n 0 -u ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.0.service
2026-03-09T13:33:00.444 INFO:tasks.cephadm:Deploying osd.1 on vm04 with /dev/vdd...
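Once "Created osd(s) 0" comes back, the harness attaches a live journal follower to the new daemon's systemd unit; that is where the interleaved journalctl@ceph.osd.0 lines in the rest of this log originate. For cephadm-managed daemons the unit name follows ceph-<fsid>@<daemon>.service, so the same follow by hand on this node is:

    # tail the cephadm-managed osd.0 unit from now on (no backlog)
    sudo journalctl -f -n 0 -u ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.0.service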
2026-03-09T13:33:00.444 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- lvm zap /dev/vdd
2026-03-09T13:33:00.567 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:00 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:33:00.567 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:00 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-09T13:33:00.568 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:00 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:00.568 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:00 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:33:00.568 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:00 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:00.568 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:00 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T13:33:00.568 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:00 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:00.568 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:33:00 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0[57314]: 2026-03-09T13:33:00.565+0000 7f8a5136f740 -1 osd.0 0 log_to_monitors true
2026-03-09T13:33:00.677 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:33:01.566 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:33:01.585 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph orch daemon add osd vm04:/dev/vdd
2026-03-09T13:33:01.767 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:33:01.791 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:01 vm04 ceph-mon[50165]: from='osd.0 [v2:192.168.123.104:6802/2613514074,v1:192.168.123.104:6803/2613514074]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-09T13:33:02.808 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:02 vm04 ceph-mon[50165]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T13:33:02.808 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:02 vm04 ceph-mon[50165]: from='osd.0 [v2:192.168.123.104:6802/2613514074,v1:192.168.123.104:6803/2613514074]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-09T13:33:02.808 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:02 vm04 ceph-mon[50165]: osdmap e6: 1 total, 0 up, 1 in
2026-03-09T13:33:02.808 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:02 vm04 ceph-mon[50165]: from='osd.0 [v2:192.168.123.104:6802/2613514074,v1:192.168.123.104:6803/2613514074]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-09T13:33:02.808 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:02 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T13:33:02.808 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:02 vm04 ceph-mon[50165]: from='client.14193 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T13:33:02.808 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:02 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T13:33:02.808 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:02 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T13:33:02.808 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:02 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:33:03.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:03 vm04 ceph-mon[50165]: from='osd.0 [v2:192.168.123.104:6802/2613514074,v1:192.168.123.104:6803/2613514074]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-09T13:33:03.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:03 vm04 ceph-mon[50165]: osdmap e7: 1 total, 0 up, 1 in
2026-03-09T13:33:03.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:03 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T13:33:03.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:03 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T13:33:03.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:03 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/1952627399' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3b5516e8-9625-400d-bbdb-d62e2b7b4a75"}]: dispatch
2026-03-09T13:33:03.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:03 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/1952627399' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3b5516e8-9625-400d-bbdb-d62e2b7b4a75"}]': finished
2026-03-09T13:33:03.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:03 vm04 ceph-mon[50165]: osdmap e8: 2 total, 0 up, 2 in
2026-03-09T13:33:03.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:03 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T13:33:03.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:03 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T13:33:03.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:03 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/446115535' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T13:33:03.891 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:33:03 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0[57314]: 2026-03-09T13:33:03.507+0000 7f8a4d2f0640 -1 osd.0 0 waiting for initial osdmap
2026-03-09T13:33:03.891 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:33:03 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0[57314]: 2026-03-09T13:33:03.513+0000 7f8a48919640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-09T13:33:04.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:04 vm04 ceph-mon[50165]: purged_snaps scrub starts
2026-03-09T13:33:04.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:04 vm04 ceph-mon[50165]: purged_snaps scrub ok
2026-03-09T13:33:04.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:04 vm04 ceph-mon[50165]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T13:33:04.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:04 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T13:33:04.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:04 vm04 ceph-mon[50165]: from='osd.0 [v2:192.168.123.104:6802/2613514074,v1:192.168.123.104:6803/2613514074]' entity='osd.0'
2026-03-09T13:33:05.540 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:05 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T13:33:05.540 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:05 vm04 ceph-mon[50165]: osd.0 [v2:192.168.123.104:6802/2613514074,v1:192.168.123.104:6803/2613514074] boot
2026-03-09T13:33:05.540 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:05 vm04 ceph-mon[50165]: osdmap e9: 2 total, 1 up, 2 in
2026-03-09T13:33:05.540 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:05 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T13:33:05.540 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:05 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T13:33:06.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:06 vm04 ceph-mon[50165]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T13:33:06.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:06 vm04 ceph-mon[50165]: osdmap e10: 2 total, 1 up, 2 in
2026-03-09T13:33:06.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:06 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T13:33:06.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:06 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:06.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:06 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:06.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:06 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:07.553 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:07 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-09T13:33:07.553 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:07 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:33:08.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:08 vm04 ceph-mon[50165]: Deploying daemon osd.1 on vm04
2026-03-09T13:33:08.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:08 vm04 ceph-mon[50165]: pgmap v13: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T13:33:09.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:09 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:33:09.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:09 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:09.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:09 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:33:09.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:09 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T13:33:09.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:09 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:09.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:09 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:10.330 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 1 on host 'vm04'
2026-03-09T13:33:10.382 DEBUG:teuthology.orchestra.run.vm04:osd.1> sudo journalctl -f -n 0 -u ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.1.service
2026-03-09T13:33:10.384 INFO:tasks.cephadm:Deploying osd.2 on vm04 with /dev/vdc...
2026-03-09T13:33:10.384 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- lvm zap /dev/vdc
2026-03-09T13:33:10.613 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:33:10.636 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:10 vm04 ceph-mon[50165]: pgmap v14: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T13:33:10.636 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:10 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:33:10.636 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:10 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:10.636 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:10 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-09T13:33:10.636 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:10 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:33:10.636 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:10 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T13:33:10.636 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:10 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:10.636 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:10 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:10.891 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:33:10 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-1[60366]: 2026-03-09T13:33:10.835+0000 7f9062860740 -1 osd.1 0 log_to_monitors true
2026-03-09T13:33:11.434 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:33:11.452 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph orch daemon add osd vm04:/dev/vdc
2026-03-09T13:33:11.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:11 vm04 ceph-mon[50165]: from='osd.1 [v2:192.168.123.104:6810/932433409,v1:192.168.123.104:6811/932433409]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-09T13:33:11.642 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:33:12.752 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:33:12 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-1[60366]: 2026-03-09T13:33:12.540+0000 7f905eff4640 -1 osd.1 0 waiting for initial osdmap
2026-03-09T13:33:12.752 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:33:12 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-1[60366]: 2026-03-09T13:33:12.549+0000 7f905a60b640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-09T13:33:12.752 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:12 vm04 ceph-mon[50165]: pgmap v15: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T13:33:12.752 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:12 vm04 ceph-mon[50165]: from='osd.1 [v2:192.168.123.104:6810/932433409,v1:192.168.123.104:6811/932433409]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-09T13:33:12.752 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:12 vm04 ceph-mon[50165]: osdmap e11: 2 total, 1 up, 2 in
2026-03-09T13:33:12.752 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:12 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T13:33:12.752 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:12 vm04 ceph-mon[50165]: from='osd.1 [v2:192.168.123.104:6810/932433409,v1:192.168.123.104:6811/932433409]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-09T13:33:12.752 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:12 vm04 ceph-mon[50165]: from='client.14202 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T13:33:12.752 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:12 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T13:33:12.752 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:12 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T13:33:12.752 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:12 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:33:13.542 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:13 vm04 ceph-mon[50165]: from='osd.1 [v2:192.168.123.104:6810/932433409,v1:192.168.123.104:6811/932433409]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-09T13:33:13.542 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:13 vm04 ceph-mon[50165]: osdmap e12: 2 total, 1 up, 2 in
2026-03-09T13:33:13.542 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:13 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T13:33:13.542 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:13 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/1745066935' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c0e8bd08-9d2a-45fa-866c-c46c2a2146de"}]: dispatch
2026-03-09T13:33:13.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:13 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/1745066935' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c0e8bd08-9d2a-45fa-866c-c46c2a2146de"}]': finished
2026-03-09T13:33:13.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:13 vm04 ceph-mon[50165]: osd.1 [v2:192.168.123.104:6810/932433409,v1:192.168.123.104:6811/932433409] boot
2026-03-09T13:33:13.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:13 vm04 ceph-mon[50165]: osdmap e13: 3 total, 2 up, 3 in
2026-03-09T13:33:13.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:13 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T13:33:13.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:13 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T13:33:13.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:13 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/3266244849' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T13:33:15.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:14 vm04 ceph-mon[50165]: purged_snaps scrub starts
2026-03-09T13:33:15.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:14 vm04 ceph-mon[50165]: purged_snaps scrub ok
2026-03-09T13:33:15.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:14 vm04 ceph-mon[50165]: pgmap v19: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T13:33:15.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:14 vm04 ceph-mon[50165]: osdmap e14: 3 total, 2 up, 3 in
2026-03-09T13:33:15.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:14 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T13:33:16.693 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:16 vm04 ceph-mon[50165]: pgmap v21: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T13:33:17.872 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:17 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-09T13:33:17.872 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:17 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:33:19.004 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:18 vm04 ceph-mon[50165]: Deploying daemon osd.2 on vm04
2026-03-09T13:33:19.004 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:18 vm04 ceph-mon[50165]: pgmap v22: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T13:33:19.699 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:19 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:33:19.699 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:19 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:19.699 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:19 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:33:19.699 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:19 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T13:33:19.699 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:19 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:19.699 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:19 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:20.188 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 2 on host 'vm04'
2026-03-09T13:33:20.246 DEBUG:teuthology.orchestra.run.vm04:osd.2> sudo journalctl -f -n 0 -u ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.2.service
2026-03-09T13:33:20.247 INFO:tasks.cephadm:Waiting for 3 OSDs to come up...
2026-03-09T13:33:20.247 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph osd stat -f json
2026-03-09T13:33:20.382 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:33:20 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2[63131]: 2026-03-09T13:33:20.381+0000 7efdc5ef6740 -1 osd.2 0 log_to_monitors true
2026-03-09T13:33:20.456 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:33:20.689 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:33:20.741 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":14,"num_osds":3,"num_up_osds":2,"osd_up_since":1773063192,"num_in_osds":3,"osd_in_since":1773063192,"num_remapped_pgs":0}
2026-03-09T13:33:21.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:20 vm04 ceph-mon[50165]: pgmap v23: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T13:33:21.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:20 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:33:21.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:20 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:21.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:20 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-09T13:33:21.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:20 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:33:21.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:20 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T13:33:21.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:20 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:21.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:20 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a'
2026-03-09T13:33:21.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:20 vm04 ceph-mon[50165]: from='osd.2 [v2:192.168.123.104:6818/754485982,v1:192.168.123.104:6819/754485982]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-09T13:33:21.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:20 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2184246537' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-09T13:33:21.743 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph osd stat -f json
2026-03-09T13:33:21.915 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:33:22.153 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:33:22.203 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":15,"num_osds":3,"num_up_osds":2,"osd_up_since":1773063192,"num_in_osds":3,"osd_in_since":1773063192,"num_remapped_pgs":0}
2026-03-09T13:33:22.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:22 vm04 ceph-mon[50165]: from='osd.2 [v2:192.168.123.104:6818/754485982,v1:192.168.123.104:6819/754485982]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-09T13:33:22.415 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:22 vm04 ceph-mon[50165]: osdmap e15: 3 total, 2 up, 3 in
2026-03-09T13:33:22.415 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:22 vm04 ceph-mon[50165]: from='osd.2 [v2:192.168.123.104:6818/754485982,v1:192.168.123.104:6819/754485982]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-09T13:33:22.415 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:22 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T13:33:22.415 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:22 vm04 ceph-mon[50165]: pgmap v25: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T13:33:22.415 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:22 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/1274089666' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-09T13:33:23.204 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph osd stat -f json
2026-03-09T13:33:23.415 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:33:23.478 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:23 vm04 ceph-mon[50165]: from='osd.2 [v2:192.168.123.104:6818/754485982,v1:192.168.123.104:6819/754485982]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-09T13:33:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:23 vm04 ceph-mon[50165]: osdmap e16: 3 total, 2 up, 3 in
2026-03-09T13:33:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:23 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T13:33:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:23 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T13:33:23.479 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:33:23 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2[63131]: 2026-03-09T13:33:23.466+0000 7efdc1e77640 -1 osd.2 0 waiting for initial osdmap
2026-03-09T13:33:23.479 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:33:23 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2[63131]: 2026-03-09T13:33:23.477+0000 7efdbd4a0640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-09T13:33:23.667 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:33:23.740 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":16,"num_osds":3,"num_up_osds":2,"osd_up_since":1773063192,"num_in_osds":3,"osd_in_since":1773063192,"num_remapped_pgs":0}
2026-03-09T13:33:24.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:24 vm04 ceph-mon[50165]: purged_snaps scrub starts
2026-03-09T13:33:24.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:24 vm04 ceph-mon[50165]: purged_snaps scrub ok
2026-03-09T13:33:24.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:24 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T13:33:24.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:24 vm04 ceph-mon[50165]: pgmap v27: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T13:33:24.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:24 vm04 ceph-mon[50165]: from='osd.2 [v2:192.168.123.104:6818/754485982,v1:192.168.123.104:6819/754485982]' entity='osd.2'
2026-03-09T13:33:24.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:24 vm04 ceph-mon[50165]: from='client.?
192.168.123.104:0/473052189' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T13:33:24.741 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph osd stat -f json 2026-03-09T13:33:24.918 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:25.152 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:33:25.670 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":17,"num_osds":3,"num_up_osds":3,"osd_up_since":1773063204,"num_in_osds":3,"osd_in_since":1773063192,"num_remapped_pgs":0} 2026-03-09T13:33:25.670 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph osd dump --format=json 2026-03-09T13:33:25.819 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:25 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:33:25.819 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:25 vm04 ceph-mon[50165]: osd.2 [v2:192.168.123.104:6818/754485982,v1:192.168.123.104:6819/754485982] boot 2026-03-09T13:33:25.819 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:25 vm04 ceph-mon[50165]: osdmap e17: 3 total, 3 up, 3 in 2026-03-09T13:33:25.819 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:25 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:33:25.819 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:25 vm04 ceph-mon[50165]: from='client.? 
192.168.123.104:0/2622559769' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T13:33:25.903 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:26.291 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:33:26.291 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":18,"fsid":"2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20","created":"2026-03-09T13:32:21.156692+0000","modified":"2026-03-09T13:33:25.469795+0000","last_up_change":"2026-03-09T13:33:24.468570+0000","last_in_change":"2026-03-09T13:33:12.683825+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T13:33:25.392642+0000","flags":32769,"flags_names":"hashpspool,creating","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"18","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{},"read_balance":{"score_type":"Fair 
distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"02b3e414-4f53-4659-8c7c-db2435785cbf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":2613514074},{"type":"v1","addr":"192.168.123.104:6803","nonce":2613514074}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":2613514074},{"type":"v1","addr":"192.168.123.104:6805","nonce":2613514074}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":2613514074},{"type":"v1","addr":"192.168.123.104:6809","nonce":2613514074}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":2613514074},{"type":"v1","addr":"192.168.123.104:6807","nonce":2613514074}]},"public_addr":"192.168.123.104:6803/2613514074","cluster_addr":"192.168.123.104:6805/2613514074","heartbeat_back_addr":"192.168.123.104:6809/2613514074","heartbeat_front_addr":"192.168.123.104:6807/2613514074","state":["exists","up"]},{"osd":1,"uuid":"3b5516e8-9625-400d-bbdb-d62e2b7b4a75","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":932433409},{"type":"v1","addr":"192.168.123.104:6811","nonce":932433409}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":932433409},{"type":"v1","addr":"192.168.123.104:6813","nonce":932433409}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6816","nonce":932433409},{"type":"v1","addr":"192.168.123.104:6817","nonce":932433409}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":932433409},{"type":"v1","addr":"192.168.123.104:6815","nonce":932433409}]},"public_addr":"192.168.123.104:6811/932433409","cluster_addr":"192.168.123.104:6813/932433409","heartbeat_back_addr":"192.168.123.104:6817/932433409","heartbeat_front_addr":"192.168.123.104:6815/932433409","state":["exists","up"]},{"osd":2,"uuid":"c0e8bd08-9d2a-45fa-866c-c46c2a2146de","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6818","nonce":754485982},{"type":"v1","addr":"192.168.123.104:6819","nonce":754485982}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6820","nonce":754485982},{"type":"v1","addr":"192.168.123.104:6821","nonce":754485982}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6824","nonce":754485982},{"type":"v1","addr":"192.168.123.104:6825","nonce":754485982}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6822","nonce":754485982},{"type":"v1","addr":"192.168.123.104:6823","nonce":754485982}]},"public_addr":"192.168.123.104:6819/754485982","cluster_addr":"192.168.123.104:6821/754485982","heartbeat_back_addr":"192.168.123.104:6825/754485982","heartbeat_front_addr":"192.168.123.104:6823/754485982","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:
33:01.577396+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:33:11.852078+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:33:21.361160+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.104:0/3592682446":"2026-03-10T13:32:41.104402+0000","192.168.123.104:6801/3092625564":"2026-03-10T13:32:41.104402+0000","192.168.123.104:0/1970530291":"2026-03-10T13:32:31.812122+0000","192.168.123.104:0/3774991332":"2026-03-10T13:32:31.812122+0000","192.168.123.104:0/1271325660":"2026-03-10T13:32:31.812122+0000","192.168.123.104:0/982753979":"2026-03-10T13:32:41.104402+0000","192.168.123.104:6800/3092625564":"2026-03-10T13:32:41.104402+0000","192.168.123.104:6801/4248435815":"2026-03-10T13:32:31.812122+0000","192.168.123.104:0/3390144586":"2026-03-10T13:32:41.104402+0000","192.168.123.104:6800/4248435815":"2026-03-10T13:32:31.812122+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T13:33:26.357 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-09T13:33:25.392642+0000', 'flags': 32769, 'flags_names': 'hashpspool,creating', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '18', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 3, 'score_stable': 3, 'optimal_score': 1, 'raw_score_acting': 3, 'raw_score_stable': 3, 
'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-09T13:33:26.357 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph osd pool get .mgr pg_num 2026-03-09T13:33:26.606 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:26.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 ceph-mon[50165]: pgmap v29: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T13:33:26.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T13:33:26.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T13:33:26.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 ceph-mon[50165]: osdmap e18: 3 total, 3 up, 3 in 2026-03-09T13:33:26.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T13:33:26.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:26.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:26.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:33:26.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:26.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:33:26.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:33:26.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:26.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 ceph-mon[50165]: from='client.? 
192.168.123.104:0/3562202765' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T13:33:26.910 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64751]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 2026-03-09T13:33:26.910 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64751]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T13:33:26.911 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64751]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T13:33:26.911 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64751]: pam_unix(sudo:session): session closed for user root 2026-03-09T13:33:26.911 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64758]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdd 2026-03-09T13:33:26.911 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64758]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T13:33:26.911 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64758]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T13:33:26.911 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64758]: pam_unix(sudo:session): session closed for user root 2026-03-09T13:33:26.911 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64766]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdc 2026-03-09T13:33:26.911 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64766]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T13:33:26.911 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64766]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T13:33:26.911 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64766]: pam_unix(sudo:session): session closed for user root 2026-03-09T13:33:26.911 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64777]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-09T13:33:26.911 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64777]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T13:33:26.911 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64777]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T13:33:26.916 INFO:teuthology.orchestra.run.vm04.stdout:pg_num: 1 2026-03-09T13:33:27.000 INFO:tasks.cephadm:Setting up client nodes... 
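[Note: the "Waiting for 3 OSDs to come up..." step above polls `ceph osd stat -f json` through a cephadm shell until num_up_osds matches num_osds (epochs 14 through 17 in this run), then verifies the .mgr pool with `ceph osd pool get .mgr pg_num`. The following is a minimal Python sketch of that polling loop, reusing the cephadm path, container image, and fsid from this job; it is an illustration of the check being performed, not teuthology's actual implementation.]

    import json
    import subprocess
    import time

    # Values taken from this job's log; the cephadm path, container image,
    # and fsid below are specific to this run.
    CEPHADM = "/home/ubuntu/cephtest/cephadm"
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20"

    def osd_stat():
        # Run `ceph osd stat -f json` inside a cephadm shell, as the task
        # does above. cephadm's "Inferring config ..." notice goes to
        # stderr, so stdout is the bare JSON object.
        out = subprocess.check_output([
            "sudo", CEPHADM, "--image", IMAGE,
            "shell", "--fsid", FSID, "--",
            "ceph", "osd", "stat", "-f", "json",
        ])
        return json.loads(out)

    # Poll roughly once a second until every OSD in the map reports up,
    # mirroring the "Waiting for 3 OSDs to come up..." loop.
    while True:
        stat = osd_stat()
        if stat["num_up_osds"] == stat["num_osds"]:
            break
        time.sleep(1)

[Against this cluster the loop exits at osdmap e17, once osd.2 finishes booting and the stat output reports "num_up_osds":3; the subsequent `ceph osd dump --format=json` and pg_num check then confirm the .mgr pool was created with pg_num 1 before client setup begins.]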
2026-03-09T13:33:27.001 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T13:33:27.199 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:27.223 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:26 vm04 sudo[64777]: pam_unix(sudo:session): session closed for user root 2026-03-09T13:33:27.465 INFO:teuthology.orchestra.run.vm04.stdout:[client.0] 2026-03-09T13:33:27.466 INFO:teuthology.orchestra.run.vm04.stdout: key = AQAnzK5pQe6UGxAAs97hyI96lp4DBISzy5HUNA== 2026-03-09T13:33:27.525 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T13:33:27.525 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-09T13:33:27.525 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-09T13:33:27.558 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-09T13:33:27.559 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-09T13:33:27.559 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph mgr dump --format=json 2026-03-09T13:33:27.757 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: Detected new or changed devices on vm04 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: osdmap e19: 3 total, 3 up, 3 in 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: from='client.? 
192.168.123.104:0/4114411039' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/1636530194' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T13:33:27.779 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:27 vm04 ceph-mon[50165]: from='client.? 
192.168.123.104:0/1636530194' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T13:33:28.001 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:33:28.049 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":12,"flags":0,"active_gid":14150,"active_name":"a","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":1985662317},{"type":"v1","addr":"192.168.123.104:6801","nonce":1985662317}]},"active_addr":"192.168.123.104:6801/1985662317","active_change":"2026-03-09T13:32:41.104513+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to 
authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.104:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.104:0","nonce":1749667098}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.104:0","nonce":1738078771}]},{"name":"volumes","addrvec":[{"type":"v2","addr":
"192.168.123.104:0","nonce":2691501580}]}]} 2026-03-09T13:33:28.050 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-09T13:33:28.050 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-09T13:33:28.050 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph osd dump --format=json 2026-03-09T13:33:28.225 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:28.441 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:33:28.441 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":20,"fsid":"2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20","created":"2026-03-09T13:32:21.156692+0000","modified":"2026-03-09T13:33:27.642979+0000","last_up_change":"2026-03-09T13:33:24.468570+0000","last_in_change":"2026-03-09T13:33:12.683825+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T13:33:25.392642+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"20","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"02b3e414-4f53-4659-8c7c-db2435785cbf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":2613514074},{"type":"v1","addr":"192.168.123.104:6803","nonce":2613514074}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":2613514074},{"type":"v1","addr":"192.168.123.104:6805","nonce":2613514074}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":2613514074},{"type":"v1","addr":"192.168.123.104:6809","nonce":2613514074}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":2613514074},{"type":"v1","addr":"192.168.123.104:6807","nonce":2613514074}]},"public_addr":"192.168.123.104:6803/2613514074","cluster_addr":"192.168.123.104:6805/2613514074","heartbeat_back_addr":"192.168.123.104:6809/2613514074","heartbeat_front_addr":"192.168.123.104:6807/2613514074","state":["exists","up"]},{"osd":1,"uuid":"3b5516e8-9625-400d-bbdb-d62e2b7b4a75","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":932433409},{"type":"v1","addr":"192.168.123.104:6811","nonce":932433409}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":932433409},{"type":"v1","addr":"192.168.123.104:6813","nonce":932433409}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6816","nonce":932433409},{"type":"v1","addr":"192.168.123.104:6817","nonce":932433409}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":932433409},{"type":"v1","addr":"192.168.123.104:6815","nonce":932433409}]},"public_addr":"192.168.123.104:6811/932433409","cluster_addr":"192.168.123.104:6813/932433409","heartbeat_back_addr":"192.168.123.104:6817/932433409","heartbeat_front_addr":"192.168.123.104:6815/932433409","state":["exists","up"]},{"osd":2,"uuid":"c0e8bd08-9d2a-45fa-866c-c46c2a2146de","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6818","nonce":754485982},{"type":"v1","addr":"192.168.123.104:6819","nonce":754485982}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6820","nonce":754485982},{"type":"v1","addr":"192.168.123.104:6821","nonce":754485982}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6824","nonce":754485982},{"type":"v1","addr":"192.168.123.104:6825","nonce":754485982}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6822","nonce":754485982},{"type":"v1","addr":"192.168.123.104:6823","nonce":754485982}]},"public_addr":"192.168.123.104:6819/754485982","cluster_addr":"192.168.123.104:6821/754485982","heartbeat_back_addr":"192.168.123.104:6825/754485982","heartbeat_front_addr":"192.168.123.104:6823/754485982","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13
:33:01.577396+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:33:11.852078+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:33:21.361160+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.104:0/3592682446":"2026-03-10T13:32:41.104402+0000","192.168.123.104:6801/3092625564":"2026-03-10T13:32:41.104402+0000","192.168.123.104:0/1970530291":"2026-03-10T13:32:31.812122+0000","192.168.123.104:0/3774991332":"2026-03-10T13:32:31.812122+0000","192.168.123.104:0/1271325660":"2026-03-10T13:32:31.812122+0000","192.168.123.104:0/982753979":"2026-03-10T13:32:41.104402+0000","192.168.123.104:6800/3092625564":"2026-03-10T13:32:41.104402+0000","192.168.123.104:6801/4248435815":"2026-03-10T13:32:31.812122+0000","192.168.123.104:0/3390144586":"2026-03-10T13:32:41.104402+0000","192.168.123.104:6800/4248435815":"2026-03-10T13:32:31.812122+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T13:33:28.485 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-09T13:33:28.485 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph osd dump --format=json 2026-03-09T13:33:28.640 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:28.864 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:33:28.864 
INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":20,"fsid":"2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20","created":"2026-03-09T13:32:21.156692+0000","modified":"2026-03-09T13:33:27.642979+0000","last_up_change":"2026-03-09T13:33:24.468570+0000","last_in_change":"2026-03-09T13:33:12.683825+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T13:33:25.392642+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"20","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"02b3e414-4f53-4659-8c7c-db2435785cbf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":2613514074},{"type":"v1","addr":"192.168.123.104:6803","nonce":2613514074}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":2613514074},{"type":"v1","addr":"192.168.123.104:6805","nonce":2613514074}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":2613514074},{"type":"v1","addr":"192.168.123.104:6809","nonce":2613514074}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":2613514074},{"type":"v1","addr":"192.168.123.104:6807","nonce":2613514074}]},"public_addr":"192.168.123.104:6803/2613514074","cluster_addr":"192.168.123.104:6805/2613514074","heartbeat_back_addr":"192.168.123.104:6809/2613514074","heartbeat_front_addr":"192.168.123.104:6807/2613514074","state":["exists","up"]},{"osd":1,"uuid":"3b5516e8-9625-400d-bbdb-d62e2b7b4a75","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":932433409},{"type":"v1","addr":"192.168.123.104:6811","nonce":932433409}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":932433409},{"type":"v1","addr":"192.168.123.104:6813","nonce":932433409}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6816","nonce":932433409},{"type":"v1","addr":"192.168.123.104:6817","nonce":932433409}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":932433409},{"type":"v1","addr":"192.168.123.104:6815","nonce":932433409}]},"public_addr":"192.168.123.104:6811/932433409","cluster_addr":"192.168.123.104:6813/932433409","heartbeat_back_addr":"192.168.123.104:6817/932433409","heartbeat_front_addr":"192.168.123.104:6815/932433409","state":["exists","up"]},{"osd":2,"uuid":"c0e8bd08-9d2a-45fa-866c-c46c2a2146de","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6818","nonce":754485982},{"type":"v1","addr":"192.168.123.104:6819","nonce":754485982}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6820","nonce":754485982},{"type":"v1","addr":"192.168.123.104:6821","nonce":754485982}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6824","nonce":754485982},{"type":"v1","addr":"192.168.123.104:6825","nonce":754485982}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6822","nonce":754485982},{"type":"v1","addr":"192.168.123.104:6823","nonce":754485982}]},"public_addr":"192.168.123.104:6819/754485982","cluster_addr":"192.168.123.104:6821/754485982","heartbeat_back_addr":"192.168.123.104:6825/754485982","heartbeat_front_addr":"192.168.123.104:6823/754485982","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13
:33:01.577396+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:33:11.852078+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:33:21.361160+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.104:0/3592682446":"2026-03-10T13:32:41.104402+0000","192.168.123.104:6801/3092625564":"2026-03-10T13:32:41.104402+0000","192.168.123.104:0/1970530291":"2026-03-10T13:32:31.812122+0000","192.168.123.104:0/3774991332":"2026-03-10T13:32:31.812122+0000","192.168.123.104:0/1271325660":"2026-03-10T13:32:31.812122+0000","192.168.123.104:0/982753979":"2026-03-10T13:32:41.104402+0000","192.168.123.104:6800/3092625564":"2026-03-10T13:32:41.104402+0000","192.168.123.104:6801/4248435815":"2026-03-10T13:32:31.812122+0000","192.168.123.104:0/3390144586":"2026-03-10T13:32:41.104402+0000","192.168.123.104:6800/4248435815":"2026-03-10T13:32:31.812122+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T13:33:28.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:28 vm04 ceph-mon[50165]: Detected new or changed devices on vm04 2026-03-09T13:33:28.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:28 vm04 ceph-mon[50165]: pgmap v32: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T13:33:28.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:28 vm04 ceph-mon[50165]: osdmap e20: 3 total, 3 up, 3 in 2026-03-09T13:33:28.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:28 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/898062584' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T13:33:28.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:28 vm04 ceph-mon[50165]: mgrmap e13: a(active, since 47s) 2026-03-09T13:33:28.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:28 vm04 ceph-mon[50165]: from='client.? 
192.168.123.104:0/3470724075' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T13:33:28.931 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph tell osd.0 flush_pg_stats 2026-03-09T13:33:28.932 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph tell osd.1 flush_pg_stats 2026-03-09T13:33:28.932 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph tell osd.2 flush_pg_stats 2026-03-09T13:33:29.147 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:29.261 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:29.307 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:29.381 INFO:teuthology.orchestra.run.vm04.stdout:73014444034 2026-03-09T13:33:29.381 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph osd last-stat-seq osd.2 2026-03-09T13:33:29.605 INFO:teuthology.orchestra.run.vm04.stdout:38654705670 2026-03-09T13:33:29.605 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph osd last-stat-seq osd.0 2026-03-09T13:33:29.626 INFO:teuthology.orchestra.run.vm04.stdout:55834574853 2026-03-09T13:33:29.626 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph osd last-stat-seq osd.1 2026-03-09T13:33:29.638 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:29.693 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:29 vm04 ceph-mon[50165]: from='client.? 
192.168.123.104:0/243794205' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T13:33:29.928 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:29.934 INFO:teuthology.orchestra.run.vm04.stdout:73014444034 2026-03-09T13:33:29.936 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:30.000 INFO:tasks.cephadm.ceph_manager.ceph:need seq 73014444034 got 73014444034 for osd.2 2026-03-09T13:33:30.000 DEBUG:teuthology.parallel:result is None 2026-03-09T13:33:30.208 INFO:teuthology.orchestra.run.vm04.stdout:55834574852 2026-03-09T13:33:30.227 INFO:teuthology.orchestra.run.vm04.stdout:38654705669 2026-03-09T13:33:30.285 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574853 got 55834574852 for osd.1 2026-03-09T13:33:30.315 INFO:tasks.cephadm.ceph_manager.ceph:need seq 38654705670 got 38654705669 for osd.0 2026-03-09T13:33:31.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:30 vm04 ceph-mon[50165]: pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T13:33:31.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:30 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2476319737' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T13:33:31.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:30 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/383901402' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T13:33:31.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:30 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/1956669658' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T13:33:31.286 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph osd last-stat-seq osd.1 2026-03-09T13:33:31.316 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph osd last-stat-seq osd.0 2026-03-09T13:33:31.474 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:31.560 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:31.727 INFO:teuthology.orchestra.run.vm04.stdout:55834574853 2026-03-09T13:33:31.793 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574853 got 55834574853 for osd.1 2026-03-09T13:33:31.793 DEBUG:teuthology.parallel:result is None 2026-03-09T13:33:31.805 INFO:teuthology.orchestra.run.vm04.stdout:38654705671 2026-03-09T13:33:31.872 INFO:tasks.cephadm.ceph_manager.ceph:need seq 38654705670 got 38654705671 for osd.0 2026-03-09T13:33:31.872 DEBUG:teuthology.parallel:result is None 2026-03-09T13:33:31.872 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-09T13:33:31.872 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph pg dump --format=json 2026-03-09T13:33:32.097 
INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:32.316 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:33:32.317 INFO:teuthology.orchestra.run.vm04.stderr:dumped all 2026-03-09T13:33:32.359 INFO:teuthology.orchestra.run.vm04.stdout:{"pg_ready":true,"pg_map":{"version":35,"stamp":"2026-03-09T13:33:31.345312+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":82776,"kb_used_data":1860,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62819496,"statfs":{"total":64411926528,"available":64327163904,"internally_reserved":0,"allocated":1904640,"data_stored":1528500,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4770,"internal_metadata":82373982},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up"
:0,"acting":0,"num_store_stats":0,"stamp_delta":"2.000252"},"pg_stats":[{"pgid":"1.0","version":"19'32","reported_seq":57,"reported_epoch":20,"state":"active+clean","last_fresh":"2026-03-09T13:33:27.645854+0000","last_change":"2026-03-09T13:33:26.689898+0000","last_active":"2026-03-09T13:33:27.645854+0000","last_peered":"2026-03-09T13:33:27.645854+0000","last_clean":"2026-03-09T13:33:27.645854+0000","last_became_active":"2026-03-09T13:33:26.689757+0000","last_became_peered":"2026-03-09T13:33:26.689757+0000","last_unstale":"2026-03-09T13:33:27.645854+0000","last_undegraded":"2026-03-09T13:33:27.645854+0000","last_fullsized":"2026-03-09T13:33:27.645854+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T13:33:25.469795+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T13:33:25.469795+0000","last_clean_scrub_stamp":"2026-03-09T13:33:25.469795+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:36:19.548700+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,2],"acting":[1,0,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"dat
a_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":17,"seq":73014444035,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27596,"kb_used_data":620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939828,"statfs":{"total":21470642176,"available":21442383872,"internally_reserved":0,"allocated":634880,"data_stored":509500,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574853,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27588,"kb_used_data":620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939836,"statfs":{"total":21470642176,"available":21442392064,"internally_reserved":0,"allocated":634880,"data_stored":509500,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":9,"seq":38654705671,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27592,"kb_used_data":620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939832,"statfs":{"total":21470642176,"available":21442387968,"internally_reserved":0,"allocated":634880,"data_stored":509500,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T13:33:32.359 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph pg dump --format=json 2026-03-09T13:33:32.520 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:32.742 INFO:teuthology.orchestra.run.vm04.stdout: 
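The exchange above follows a fixed pattern: `ceph tell osd.N flush_pg_stats` prints a per-OSD stats sequence number, `ceph osd last-stat-seq osd.N` is polled until the manager has caught up (the "need seq X got Y for osd.N" lines), and then `ceph pg dump --format=json` is polled until every PG reports active+clean. A minimal Python sketch of that protocol — not teuthology's actual implementation; `ceph()` is a hypothetical wrapper around the cephadm shell invocation shown in the DEBUG lines, with the image and fsid values taken from this log:

```python
import json
import subprocess
import time

# Values copied from the log above; ceph() itself is a hypothetical
# stand-in for teuthology's remote-run machinery.
CEPHADM = "/home/ubuntu/cephtest/cephadm"
IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
FSID = "2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20"

def ceph(*args):
    # Run `ceph ...` inside a cephadm shell, as the DEBUG lines show.
    cmd = ["sudo", CEPHADM, "--image", IMAGE, "shell",
           "--fsid", FSID, "--", "ceph", *args]
    return subprocess.run(cmd, check=True, capture_output=True,
                          text=True).stdout

def wait_for_clean(osd_ids=(0, 1, 2), timeout=300):
    # 1. flush_pg_stats returns a per-OSD stats sequence number.
    need = {osd: int(ceph("tell", f"osd.{osd}", "flush_pg_stats"))
            for osd in osd_ids}
    deadline = time.time() + timeout
    # 2. Poll last-stat-seq until the reported seq catches up
    #    ("need seq X got Y for osd.N" in the log).
    for osd, seq in need.items():
        while int(ceph("osd", "last-stat-seq", f"osd.{osd}")) < seq:
            if time.time() > deadline:
                raise TimeoutError(f"osd.{osd} stats never flushed")
            time.sleep(1)
    # 3. Poll `pg dump` until every PG reports active+clean.
    while True:
        pg_map = json.loads(ceph("pg", "dump", "--format=json"))["pg_map"]
        states = [pg["state"] for pg in pg_map["pg_stats"]]
        if states and all(s == "active+clean" for s in states):
            return
        if time.time() > deadline:
            raise TimeoutError(f"cluster not clean: {states}")
        time.sleep(2)
```

The retry in step 2 matches what the log records: the first poll for osd.0 and osd.1 returned a sequence one short of the flushed value, and the check passed on the next attempt.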
2026-03-09T13:33:32.743 INFO:teuthology.orchestra.run.vm04.stderr:dumped all 2026-03-09T13:33:32.794 INFO:teuthology.orchestra.run.vm04.stdout:{"pg_ready":true,"pg_map":{"version":35,"stamp":"2026-03-09T13:33:31.345312+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":82776,"kb_used_data":1860,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62819496,"statfs":{"total":64411926528,"available":64327163904,"internally_reserved":0,"allocated":1904640,"data_stored":1528500,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4770,"internal_metadata":82373982},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"2.000252"},"pg_stats":[{"pgid":"1.0","version":"19'32","reported_seq":57,"reported_epoch":20,"state":"active+clean","last_fresh":"2026-03-09T
13:33:27.645854+0000","last_change":"2026-03-09T13:33:26.689898+0000","last_active":"2026-03-09T13:33:27.645854+0000","last_peered":"2026-03-09T13:33:27.645854+0000","last_clean":"2026-03-09T13:33:27.645854+0000","last_became_active":"2026-03-09T13:33:26.689757+0000","last_became_peered":"2026-03-09T13:33:26.689757+0000","last_unstale":"2026-03-09T13:33:27.645854+0000","last_undegraded":"2026-03-09T13:33:27.645854+0000","last_fullsized":"2026-03-09T13:33:27.645854+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T13:33:25.469795+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T13:33:25.469795+0000","last_clean_scrub_stamp":"2026-03-09T13:33:25.469795+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:36:19.548700+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,2],"acting":[1,0,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}
],"osd_stats":[{"osd":2,"up_from":17,"seq":73014444035,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27596,"kb_used_data":620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939828,"statfs":{"total":21470642176,"available":21442383872,"internally_reserved":0,"allocated":634880,"data_stored":509500,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574853,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27588,"kb_used_data":620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939836,"statfs":{"total":21470642176,"available":21442392064,"internally_reserved":0,"allocated":634880,"data_stored":509500,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":9,"seq":38654705671,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27592,"kb_used_data":620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939832,"statfs":{"total":21470642176,"available":21442387968,"internally_reserved":0,"allocated":634880,"data_stored":509500,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T13:33:32.794 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-09T13:33:32.794 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-09T13:33:32.794 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-09T13:33:32.794 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph health --format=json 2026-03-09T13:33:32.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:32 vm04 ceph-mon[50165]: pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T13:33:32.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:32 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/497844509' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T13:33:32.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:32 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2073985746' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T13:33:32.963 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:33.197 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:33:33.197 INFO:teuthology.orchestra.run.vm04.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-09T13:33:33.256 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-09T13:33:33.256 INFO:tasks.cephadm:Setup complete, yielding 2026-03-09T13:33:33.256 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-09T13:33:33.258 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm04.local 2026-03-09T13:33:33.258 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- bash -c 'ceph osd pool create foo' 2026-03-09T13:33:33.418 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:33.662 INFO:teuthology.orchestra.run.vm04.stderr:pool 'foo' created 2026-03-09T13:33:33.733 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- bash -c 'rbd pool init foo' 2026-03-09T13:33:33.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:33 vm04 ceph-mon[50165]: from='client.14250 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T13:33:33.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:33 vm04 ceph-mon[50165]: from='client.14252 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T13:33:33.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:33 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/4173885434' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T13:33:33.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:33 vm04 ceph-mon[50165]: from='client.? 
192.168.123.104:0/113905748' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-09T13:33:33.900 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:35.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:34 vm04 ceph-mon[50165]: pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T13:33:35.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:34 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/113905748' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished 2026-03-09T13:33:35.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:34 vm04 ceph-mon[50165]: osdmap e21: 3 total, 3 up, 3 in 2026-03-09T13:33:35.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:34 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2824237139' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-09T13:33:36.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:35 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2824237139' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished 2026-03-09T13:33:36.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:35 vm04 ceph-mon[50165]: osdmap e22: 3 total, 3 up, 3 in 2026-03-09T13:33:36.858 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- bash -c 'ceph orch apply iscsi foo u p' 2026-03-09T13:33:37.022 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config 2026-03-09T13:33:37.046 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:36 vm04 ceph-mon[50165]: pgmap v39: 33 pgs: 23 unknown, 10 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T13:33:37.046 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:36 vm04 ceph-mon[50165]: osdmap e23: 3 total, 3 up, 3 in 2026-03-09T13:33:37.260 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled iscsi.foo update... 2026-03-09T13:33:37.347 INFO:teuthology.run_tasks:Running task workunit... 2026-03-09T13:33:37.351 INFO:tasks.workunit:Pulling workunits from ref 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T13:33:37.352 INFO:tasks.workunit:Making a separate scratch dir for every client... 
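The workunit task now probes for a per-client scratch directory, creates it, and clones the suite repo pinned to the workunit SHA, as the following lines show. A sketch of the same preparation — paths, repo URL, and SHA are taken from the log; `prepare_workunit` is a hypothetical name, not teuthology's actual API:

```python
import subprocess
from pathlib import Path

# Paths, repo URL and SHA as they appear in the log lines below.
TESTDIR = Path("/home/ubuntu/cephtest")
REPO = "https://github.com/kshtsk/ceph.git"
SHA1 = "569c3e99c9b32a51b4eaf08731c728f4513ed589"

def prepare_workunit(client="client.0"):
    # Per-client scratch dir: the failed `stat` in the log is just the
    # existence probe that precedes `mkdir`.
    scratch = TESTDIR / "mnt.0" / client
    scratch.mkdir(parents=True, exist_ok=True)
    # Fresh clone of the suite repo, pinned to the workunit SHA.
    clone = TESTDIR / f"clone.{client}"
    subprocess.run(["rm", "-rf", str(clone)], check=True)
    subprocess.run(["git", "clone", REPO, str(clone)], check=True)
    subprocess.run(["git", "checkout", SHA1], cwd=clone, check=True)
    return scratch, clone
```

Once the clone is checked out, the configured workunit scripts run from it against the scratch directory, which is why each client gets its own isolated `mnt.N/client.N` tree.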
2026-03-09T13:33:37.352 DEBUG:teuthology.orchestra.run.vm04:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-09T13:33:37.377 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T13:33:37.378 INFO:teuthology.orchestra.run.vm04.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-09T13:33:37.378 DEBUG:teuthology.orchestra.run.vm04:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-09T13:33:37.440 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-09T13:33:37.441 DEBUG:teuthology.orchestra.run.vm04:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-09T13:33:37.503 INFO:tasks.workunit:timeout=3h 2026-03-09T13:33:37.503 INFO:tasks.workunit:cleanup=True 2026-03-09T13:33:37.503 DEBUG:teuthology.orchestra.run.vm04:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T13:33:37.593 INFO:tasks.workunit.client.0.vm04.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-09T13:33:37.835 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:37 vm04 ceph-mon[50165]: osdmap e24: 3 total, 3 up, 3 in 2026-03-09T13:33:37.835 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:37 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:37.835 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:37 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:33:37.835 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:37 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:33:37.835 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:37 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:33:37.835 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:37 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:37.835 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:37 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm04.nfjoun", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T13:33:37.835 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:37 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm04.nfjoun", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T13:33:37.835 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:37 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:33:38.889 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": 
"orch apply iscsi", "pool": "foo", "api_user": "u", "api_password": "p", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: Saving service iscsi.foo spec with placement count:1 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: Deploying daemon iscsi.foo.vm04.nfjoun on vm04 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: pgmap v42: 33 pgs: 11 unknown, 22 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm04"}]: dispatch 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:38.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:38 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2180189236' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T13:33:39.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: Checking pool "foo" exists for service iscsi.foo 2026-03-09T13:33:39.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: Metadata not up to date on all hosts. Skipping non agent specs 2026-03-09T13:33:39.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T13:33:39.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: Adding iSCSI gateway http://:@192.168.123.104:5000 to Dashboard 2026-03-09T13:33:39.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T13:33:39.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm04"}]: dispatch 2026-03-09T13:33:39.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: Metadata not up to date on all hosts. Skipping non agent specs 2026-03-09T13:33:39.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: from='client.? 
192.168.123.104:0/1878424205' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:0/3592682446"}]: dispatch 2026-03-09T13:33:39.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:39.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:39.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:39.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:33:39.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:39.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:33:39.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:33:39.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:39 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:41.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:40 vm04 ceph-mon[50165]: pgmap v43: 33 pgs: 33 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T13:33:41.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:40 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/1878424205' entity='client.iscsi.foo.vm04.nfjoun' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:0/3592682446"}]': finished 2026-03-09T13:33:41.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:40 vm04 ceph-mon[50165]: mgrmap e14: a(active, since 58s) 2026-03-09T13:33:41.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:40 vm04 ceph-mon[50165]: osdmap e25: 3 total, 3 up, 3 in 2026-03-09T13:33:41.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:40 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2496249439' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:6801/3092625564"}]: dispatch 2026-03-09T13:33:42.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:41 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2496249439' entity='client.iscsi.foo.vm04.nfjoun' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:6801/3092625564"}]': finished 2026-03-09T13:33:42.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:41 vm04 ceph-mon[50165]: osdmap e26: 3 total, 3 up, 3 in 2026-03-09T13:33:42.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:41 vm04 ceph-mon[50165]: from='client.? 
192.168.123.104:0/607916977' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:0/1970530291"}]: dispatch 2026-03-09T13:33:42.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:41 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:33:43.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:42 vm04 ceph-mon[50165]: pgmap v46: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 361 B/s rd, 541 B/s wr, 2 op/s 2026-03-09T13:33:43.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:42 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/607916977' entity='client.iscsi.foo.vm04.nfjoun' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:0/1970530291"}]': finished 2026-03-09T13:33:43.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:42 vm04 ceph-mon[50165]: osdmap e27: 3 total, 3 up, 3 in 2026-03-09T13:33:43.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:42 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2056179010' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:0/3774991332"}]: dispatch 2026-03-09T13:33:43.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:42 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2056179010' entity='client.iscsi.foo.vm04.nfjoun' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:0/3774991332"}]': finished 2026-03-09T13:33:43.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:42 vm04 ceph-mon[50165]: osdmap e28: 3 total, 3 up, 3 in 2026-03-09T13:33:43.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:42 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/1665383861' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:0/1271325660"}]: dispatch 2026-03-09T13:33:44.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:44 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/1665383861' entity='client.iscsi.foo.vm04.nfjoun' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:0/1271325660"}]': finished 2026-03-09T13:33:44.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:44 vm04 ceph-mon[50165]: osdmap e29: 3 total, 3 up, 3 in 2026-03-09T13:33:44.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:44 vm04 ceph-mon[50165]: pgmap v50: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 574 B/s rd, 861 B/s wr, 3 op/s 2026-03-09T13:33:44.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:44 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2837746606' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:0/982753979"}]: dispatch 2026-03-09T13:33:45.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:45 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2837746606' entity='client.iscsi.foo.vm04.nfjoun' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:0/982753979"}]': finished 2026-03-09T13:33:45.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:45 vm04 ceph-mon[50165]: osdmap e30: 3 total, 3 up, 3 in 2026-03-09T13:33:45.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:45 vm04 ceph-mon[50165]: from='client.? 
192.168.123.104:0/2550788183' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:6800/3092625564"}]: dispatch 2026-03-09T13:33:46.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:46 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2550788183' entity='client.iscsi.foo.vm04.nfjoun' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:6800/3092625564"}]': finished 2026-03-09T13:33:46.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:46 vm04 ceph-mon[50165]: osdmap e31: 3 total, 3 up, 3 in 2026-03-09T13:33:46.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:46 vm04 ceph-mon[50165]: pgmap v53: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T13:33:46.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:46 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/964813772' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:6801/4248435815"}]: dispatch 2026-03-09T13:33:47.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:47 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/964813772' entity='client.iscsi.foo.vm04.nfjoun' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:6801/4248435815"}]': finished 2026-03-09T13:33:47.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:47 vm04 ceph-mon[50165]: osdmap e32: 3 total, 3 up, 3 in 2026-03-09T13:33:47.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:47 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/63210129' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:0/3390144586"}]: dispatch 2026-03-09T13:33:48.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:48 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/63210129' entity='client.iscsi.foo.vm04.nfjoun' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:0/3390144586"}]': finished 2026-03-09T13:33:48.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:48 vm04 ceph-mon[50165]: osdmap e33: 3 total, 3 up, 3 in 2026-03-09T13:33:48.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:48 vm04 ceph-mon[50165]: pgmap v56: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T13:33:48.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:48 vm04 ceph-mon[50165]: from='client.? 192.168.123.104:0/2422456266' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:6800/4248435815"}]: dispatch 2026-03-09T13:33:49.640 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:49 vm04 ceph-mon[50165]: from='client.? 
192.168.123.104:0/2422456266' entity='client.iscsi.foo.vm04.nfjoun' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.104:6800/4248435815"}]': finished 2026-03-09T13:33:49.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:49 vm04 ceph-mon[50165]: osdmap e34: 3 total, 3 up, 3 in 2026-03-09T13:33:49.641 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:49 vm04 ceph-mon[50165]: from='client.14267 -' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T13:33:50.640 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:50 vm04 ceph-mon[50165]: pgmap v58: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T13:33:52.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:52 vm04 ceph-mon[50165]: pgmap v59: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T13:33:55.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:54 vm04 ceph-mon[50165]: pgmap v60: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 724 B/s rd, 0 op/s 2026-03-09T13:33:57.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:56 vm04 ceph-mon[50165]: pgmap v61: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T13:33:59.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:58 vm04 ceph-mon[50165]: pgmap v62: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T13:34:00.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:59 vm04 ceph-mon[50165]: from='client.14267 -' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T13:34:00.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:59 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:34:00.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:59 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:34:00.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:33:59 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:34:01.390 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:00 vm04 ceph-mon[50165]: pgmap v63: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 928 B/s rd, 0 op/s 2026-03-09T13:34:01.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:00 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:34:01.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:00 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:34:01.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:00 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:34:03.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:02 vm04 ceph-mon[50165]: pgmap v64: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T13:34:05.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:04 vm04 ceph-mon[50165]: pgmap v65: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T13:34:07.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:06 vm04 ceph-mon[50165]: pgmap v66: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 
GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T13:34:09.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:08 vm04 ceph-mon[50165]: pgmap v67: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T13:34:10.390 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:09 vm04 ceph-mon[50165]: from='client.14267 -' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T13:34:11.390 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:10 vm04 ceph-mon[50165]: pgmap v68: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T13:34:13.390 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:12 vm04 ceph-mon[50165]: pgmap v69: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T13:34:15.390 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:14 vm04 ceph-mon[50165]: pgmap v70: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T13:34:17.390 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:16 vm04 ceph-mon[50165]: pgmap v71: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T13:34:19.249 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:18 vm04 ceph-mon[50165]: pgmap v72: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T13:34:20.119 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:19 vm04 ceph-mon[50165]: from='client.14267 -' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T13:34:21.390 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:20 vm04 ceph-mon[50165]: pgmap v73: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T13:34:21.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:20 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:34:21.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:20 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:34:21.391 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:20 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:34:23.390 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:22 vm04 ceph-mon[50165]: pgmap v74: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T13:34:24.640 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:24 vm04 ceph-mon[50165]: pgmap v75: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T13:34:26.640 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:26 vm04 ceph-mon[50165]: pgmap v76: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T13:34:28.926 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:28 vm04 ceph-mon[50165]: pgmap v77: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'. 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr:You are in 'detached HEAD' state. 
You can look around, make experimental 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr:state without impacting any branches by switching back to a branch. 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr:do so (now or later) by using -c with the switch command. Example: 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr: git switch -c <new-branch-name> 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr:Or undo this operation with: 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr: git switch - 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-09T13:34:29.032 INFO:tasks.workunit.client.0.vm04.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose 2026-03-09T13:34:29.037 DEBUG:teuthology.orchestra.run.vm04:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-09T13:34:29.053 INFO:tasks.workunit.client.0.vm04.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-09T13:34:29.055 INFO:tasks.workunit.client.0.vm04.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T13:34:29.055 INFO:tasks.workunit.client.0.vm04.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-09T13:34:29.101 INFO:tasks.workunit.client.0.vm04.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-09T13:34:29.135 INFO:tasks.workunit.client.0.vm04.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-09T13:34:29.164 INFO:tasks.workunit.client.0.vm04.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T13:34:29.165 INFO:tasks.workunit.client.0.vm04.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T13:34:29.165 INFO:tasks.workunit.client.0.vm04.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-09T13:34:29.193 INFO:tasks.workunit.client.0.vm04.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T13:34:29.196 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T13:34:29.196 DEBUG:teuthology.orchestra.run.vm04:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-09T13:34:29.251 INFO:tasks.workunit:Running workunits matching cephadm/test_iscsi_pids_limit.sh on client.0... 2026-03-09T13:34:29.251 INFO:tasks.workunit:Running workunit cephadm/test_iscsi_pids_limit.sh... 
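The first workunit, cephadm/test_iscsi_pids_limit.sh, verifies that the two iSCSI gateway containers run with an unlimited pids cgroup. Its logic is visible in the trace that follows: read the cgroup-v1 path /sys/fs/cgroup/pids/pids.max, fall back to the unified cgroup-v2 path /sys/fs/cgroup/pids.max when the first is absent (as on this CentOS 9 Stream host), require the value "max", then fork roughly 20000 background sleeps inside each container to prove the limit really is unbounded. A minimal sketch of that check, reconstructed from the trace below rather than copied from the script:

    # Hedged reconstruction of the per-container check in test_iscsi_pids_limit.sh.
    for id in $(sudo podman ps -qa --filter=name=iscsi); do
      # Try the cgroup-v1 location first, then the unified (v2) hierarchy.
      limit=$(sudo podman exec "$id" cat /sys/fs/cgroup/pids/pids.max 2>/dev/null)
      [ -n "$limit" ] || limit=$(sudo podman exec "$id" cat /sys/fs/cgroup/pids.max)
      test "$limit" = max    # the gateway containers must not be pid-limited
      # Exercise the limit: thousands of forks inside the container.
      sudo podman exec "$id" /bin/sh -c 'for j in {0..20000}; do sleep 300 & done'
    done

All of this runs on a single VPS that hosts the entire cluster, so the fork storm is a substantial memory load; the OOM-killer events further down are consistent with it.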
2026-03-09T13:34:29.251 DEBUG:teuthology.orchestra.run.vm04:workunit test cephadm/test_iscsi_pids_limit.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh 2026-03-09T13:34:29.310 INFO:tasks.workunit.client.0.vm04.stderr:++ sudo podman ps -qa --filter=name=iscsi 2026-03-09T13:34:29.345 INFO:tasks.workunit.client.0.vm04.stderr:+ ISCSI_CONT_IDS='8b5aa6f32c13 2026-03-09T13:34:29.345 INFO:tasks.workunit.client.0.vm04.stderr:7e9581115351' 2026-03-09T13:34:29.346 INFO:tasks.workunit.client.0.vm04.stderr:++ wc -w 2026-03-09T13:34:29.346 INFO:tasks.workunit.client.0.vm04.stderr:++ echo 8b5aa6f32c13 7e9581115351 2026-03-09T13:34:29.347 INFO:tasks.workunit.client.0.vm04.stderr:+ CONT_COUNT=2 2026-03-09T13:34:29.347 INFO:tasks.workunit.client.0.vm04.stderr:+ test 2 -eq 2 2026-03-09T13:34:29.347 INFO:tasks.workunit.client.0.vm04.stderr:+ for i in ${ISCSI_CONT_IDS} 2026-03-09T13:34:29.347 INFO:tasks.workunit.client.0.vm04.stderr:++ sudo podman exec 8b5aa6f32c13 cat /sys/fs/cgroup/pids/pids.max 2026-03-09T13:34:29.386 INFO:tasks.workunit.client.0.vm04.stderr:cat: /sys/fs/cgroup/pids/pids.max: No such file or directory 2026-03-09T13:34:29.444 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' ']' 2026-03-09T13:34:29.444 INFO:tasks.workunit.client.0.vm04.stderr:++ sudo podman exec 8b5aa6f32c13 cat /sys/fs/cgroup/pids.max 2026-03-09T13:34:29.536 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' max ']' 2026-03-09T13:34:29.536 INFO:tasks.workunit.client.0.vm04.stderr:++ sudo podman exec 8b5aa6f32c13 cat /sys/fs/cgroup/pids.max 2026-03-09T13:34:29.574 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:29 vm04 ceph-mon[50165]: from='client.14267 -' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T13:34:29.629 INFO:tasks.workunit.client.0.vm04.stderr:+ pid_limit=max 2026-03-09T13:34:29.629 INFO:tasks.workunit.client.0.vm04.stderr:+ test max == max 2026-03-09T13:34:29.629 INFO:tasks.workunit.client.0.vm04.stderr:+ for i in ${ISCSI_CONT_IDS} 2026-03-09T13:34:29.629 INFO:tasks.workunit.client.0.vm04.stderr:++ sudo podman exec 7e9581115351 cat /sys/fs/cgroup/pids/pids.max 2026-03-09T13:34:29.665 INFO:tasks.workunit.client.0.vm04.stderr:cat: /sys/fs/cgroup/pids/pids.max: No such file or directory 2026-03-09T13:34:29.716 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' ']' 2026-03-09T13:34:29.716 INFO:tasks.workunit.client.0.vm04.stderr:++ sudo podman exec 7e9581115351 cat /sys/fs/cgroup/pids.max 2026-03-09T13:34:29.806 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' max ']' 2026-03-09T13:34:29.806 INFO:tasks.workunit.client.0.vm04.stderr:++ sudo podman exec 7e9581115351 cat /sys/fs/cgroup/pids.max 2026-03-09T13:34:29.892 INFO:tasks.workunit.client.0.vm04.stderr:+ pid_limit=max 2026-03-09T13:34:29.892 INFO:tasks.workunit.client.0.vm04.stderr:+ test max == max 2026-03-09T13:34:29.892 INFO:tasks.workunit.client.0.vm04.stderr:+ for i in ${ISCSI_CONT_IDS} 2026-03-09T13:34:29.893 INFO:tasks.workunit.client.0.vm04.stderr:+ sudo podman exec 8b5aa6f32c13 
/bin/sh -c 'for j in {0..20000}; do sleep 300 & done' 2026-03-09T13:34:30.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:30 vm04 ceph-mon[50165]: pgmap v78: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T13:34:32.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:32 vm04 ceph-mon[50165]: pgmap v79: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T13:34:34.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:34 vm04 ceph-mon[50165]: pgmap v80: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T13:34:36.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:36 vm04 ceph-mon[50165]: pgmap v81: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T13:34:38.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:38 vm04 ceph-mon[50165]: pgmap v82: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T13:34:39.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:39 vm04 ceph-mon[50165]: from='client.14267 -' entity='client.iscsi.foo.vm04.nfjoun' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T13:34:40.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:40 vm04 ceph-mon[50165]: pgmap v83: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T13:34:40.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:40 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:34:40.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:40 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:34:40.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:40 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:34:40.892 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:40 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:34:41.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:41 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:34:41.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:41 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:34:41.891 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:41 vm04 ceph-mon[50165]: from='mgr.14150 192.168.123.104:0/3604632600' entity='mgr.a' 2026-03-09T13:34:42.163 INFO:tasks.workunit.client.0.vm04.stderr:+ for i in ${ISCSI_CONT_IDS} 2026-03-09T13:34:42.163 INFO:tasks.workunit.client.0.vm04.stderr:+ sudo podman exec 7e9581115351 /bin/sh -c 'for j in {0..20000}; do sleep 300 & done' 2026-03-09T13:34:42.890 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:42 vm04 ceph-mon[50165]: pgmap v84: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T13:34:44.898 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:44 vm04 ceph-mon[50165]: pgmap v85: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T13:34:55.900 
INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:34:54 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a.service: A process of this unit has been killed by the OOM killer. 2026-03-09T13:34:55.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:34:55 vm04 ceph-mon[50165]: pgmap v86: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T13:34:56.326 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:34:56 vm04 podman[91362]: 2026-03-09 13:34:56.079927299 +0000 UTC m=+0.124997260 container died 7649b74b64f9c2b1461ae0da14b272b83c9fe83e8aade6c992bf8a9f4cee2a43 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T13:34:56.326 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:34:56 vm04 podman[91362]: 2026-03-09 13:34:56.116429881 +0000 UTC m=+0.161499831 container remove 7649b74b64f9c2b1461ae0da14b272b83c9fe83e8aade6c992bf8a9f4cee2a43 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid) 2026-03-09T13:34:56.326 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:34:56 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a.service: Main process exited, code=exited, status=137/n/a 2026-03-09T13:34:56.640 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:34:56 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a.service: Failed with result 'exit-code'. 2026-03-09T13:34:56.641 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:34:56 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a.service: Consumed 15.844s CPU time. 2026-03-09T13:35:06.942 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:06 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a.service: Scheduled restart job, restart counter is at 1. 2026-03-09T13:35:07.894 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:07 vm04 systemd[1]: Stopped Ceph mgr.a for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20. 2026-03-09T13:35:07.895 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:07 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a.service: Consumed 15.844s CPU time. 
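status=137 is 128+9: the container's main process was killed with SIGKILL, in this case by the kernel OOM killer, as systemd states explicitly. Because the cephadm unit has a restart policy, systemd stops the dead unit, bumps the restart counter ("restart counter is at 1") and relaunches it about ten seconds later. Two standard commands for confirming an OOM kill from the host; the unit name is the cephadm-generated one visible above:

    # The kernel ring buffer records the OOM decision.
    journalctl -k | grep -Ei 'out of memory|oom'
    # Unit-level view of the kill, exit status and restart counter.
    systemctl status 'ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a.service'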
2026-03-09T13:35:11.643 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:08 vm04 systemd[1]: Starting Ceph mgr.a for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20... 2026-03-09T13:35:12.377 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:35:10 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.2.service: A process of this unit has been killed by the OOM killer. 2026-03-09T13:35:25.399 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:35:23 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.0.service: A process of this unit has been killed by the OOM killer. 2026-03-09T13:35:26.374 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:26 vm04 ceph-mon[50165]: osd.2 reported immediately failed by osd.1 2026-03-09T13:35:26.374 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:26 vm04 ceph-mon[50165]: osd.2 failed (root=default,host=vm04) (connection refused reported by osd.1) 2026-03-09T13:35:26.374 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:26 vm04 ceph-mon[50165]: osd.2 reported immediately failed by osd.1 2026-03-09T13:35:26.374 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:26 vm04 ceph-mon[50165]: osd.2 reported immediately failed by osd.0 2026-03-09T13:35:26.374 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:26 vm04 ceph-mon[50165]: osd.2 reported immediately failed by osd.1 2026-03-09T13:35:26.374 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:26 vm04 ceph-mon[50165]: osd.2 reported immediately failed by osd.0 2026-03-09T13:35:26.374 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:26 vm04 ceph-mon[50165]: osd.2 reported immediately failed by osd.1 2026-03-09T13:35:26.374 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:26 vm04 ceph-mon[50165]: osd.2 reported immediately failed by osd.0 2026-03-09T13:35:26.374 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:26 vm04 ceph-mon[50165]: osd.2 reported immediately failed by osd.1 2026-03-09T13:35:26.374 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:26 vm04 ceph-mon[50165]: osd.2 reported immediately failed by osd.0 2026-03-09T13:35:26.374 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:26 vm04 ceph-mon[50165]: osd.2 reported immediately failed by osd.1 2026-03-09T13:35:26.374 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:26 vm04 ceph-mon[50165]: osd.2 reported immediately failed by osd.0 2026-03-09T13:35:26.374 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:26 vm04 ceph-mon[50165]: osd.2 reported immediately failed by osd.1 2026-03-09T13:35:26.374 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:26 vm04 ceph-mon[50165]: osd.2 reported immediately failed by osd.0 2026-03-09T13:35:26.374 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:26 vm04 ceph-mon[50165]: osd.2 reported immediately failed by osd.1 2026-03-09T13:35:26.891 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:35:26 vm04 podman[93310]: 2026-03-09 13:35:26.624609866 +0000 UTC m=+9.380816892 container remove db07541d0ddc97e0110e4a81db9d6c73e1bac2833191bd8fb7cefeb701e26566 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, 
org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T13:35:26.891 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:35:26 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.2.service: Main process exited, code=exited, status=137/n/a 2026-03-09T13:35:27.394 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:27 vm04 ceph-mon[50165]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T13:35:27.394 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:27 vm04 ceph-mon[50165]: osdmap e35: 3 total, 2 up, 3 in 2026-03-09T13:35:27.394 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:27 vm04 ceph-mon[50165]: Manager daemon a is unresponsive. No standby daemons available. 2026-03-09T13:35:27.395 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:27 vm04 ceph-mon[50165]: osdmap e36: 3 total, 2 up, 3 in 2026-03-09T13:35:27.395 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:27 vm04 ceph-mon[50165]: mgrmap e15: no daemons active (since 0.130922s) 2026-03-09T13:35:27.395 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:27 vm04 ceph-mon[50165]: osd.0 reported immediately failed by osd.1 2026-03-09T13:35:27.395 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:27 vm04 ceph-mon[50165]: osd.0 failed (root=default,host=vm04) (connection refused reported by osd.1) 2026-03-09T13:35:27.395 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:27 vm04 ceph-mon[50165]: osd.0 reported immediately failed by osd.1 2026-03-09T13:35:27.395 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:27 vm04 ceph-mon[50165]: osd.0 reported immediately failed by osd.1 2026-03-09T13:35:45.399 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:44 vm04 ceph-mon[50165]: Health check update: 2 osds down (OSD_DOWN) 2026-03-09T13:35:45.399 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:45 vm04 ceph-mon[50165]: osdmap e37: 3 total, 1 up, 3 in 2026-03-09T13:35:45.866 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:35:45 vm04 podman[93414]: 2026-03-09 13:35:45.386587253 +0000 UTC m=+18.219924058 container remove 5aa3004156d989c7a8ce1f2bb25b271a0968c5193ccad51c446b53313c0c45a0 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) 2026-03-09T13:35:45.866 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:35:45 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.0.service: Main process exited, code=exited, status=137/n/a 2026-03-09T13:35:46.897 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:46 vm04 podman[93739]: 2026-03-09 13:35:46.596673614 +0000 UTC m=+1.859984816 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T13:35:46.898 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:46 vm04 podman[93739]: 2026-03-09 13:35:46.767421216 +0000 UTC m=+2.030732409 container create d073ad13b238df5f64c672761b903f1d1cde9d01ab1d2654570546da034009b7 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T13:35:47.391 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:47 vm04 podman[93739]: 2026-03-09 13:35:47.115907468 +0000 UTC m=+2.379218650 container init d073ad13b238df5f64c672761b903f1d1cde9d01ab1d2654570546da034009b7 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T13:35:47.391 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:47 vm04 podman[93739]: 2026-03-09 13:35:47.139614975 +0000 UTC m=+2.402926168 container start d073ad13b238df5f64c672761b903f1d1cde9d01ab1d2654570546da034009b7 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223) 2026-03-09T13:35:47.391 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:47 vm04 bash[93739]: d073ad13b238df5f64c672761b903f1d1cde9d01ab1d2654570546da034009b7 2026-03-09T13:35:47.391 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:47 vm04 systemd[1]: Started Ceph mgr.a for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20. 
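The restart path is fully visible here: systemd starts the unit, the unit's wrapper invokes podman, and podman pulls, creates, inits and starts a fresh mgr-a container from the same pinned image before systemd reports "Started Ceph mgr.a". Once the manager is back, a standard way to confirm the cluster sees it again, run from inside a cephadm shell:

    ceph -s         # the mgrmap should show mgr.a active again
    ceph orch ps    # daemon table, including the mgr's new container id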
2026-03-09T13:35:48.125 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:35:48 vm04 podman[95293]: 2026-03-09 13:35:47.922314521 +0000 UTC m=+0.132085232 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T13:35:48.125 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:35:48 vm04 podman[95293]: 2026-03-09 13:35:48.018394092 +0000 UTC m=+0.228164883 container create a83dff13c033aacf97378238118200bc245c6eeefc586a514e6e562495b4955d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T13:35:48.125 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:35:47 vm04 ceph-mon[50165]: osdmap e38: 3 total, 1 up, 3 in 2026-03-09T13:35:48.391 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:35:48 vm04 podman[95293]: 2026-03-09 13:35:48.156509911 +0000 UTC m=+0.366280612 container init a83dff13c033aacf97378238118200bc245c6eeefc586a514e6e562495b4955d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-deactivate, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) 2026-03-09T13:35:48.391 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:35:48 vm04 podman[95293]: 2026-03-09 13:35:48.189706311 +0000 UTC m=+0.399477012 container start a83dff13c033aacf97378238118200bc245c6eeefc586a514e6e562495b4955d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, ceph=True, CEPH_REF=squid, 
org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T13:35:48.391 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:35:48 vm04 podman[95293]: 2026-03-09 13:35:48.212013448 +0000 UTC m=+0.421784149 container attach a83dff13c033aacf97378238118200bc245c6eeefc586a514e6e562495b4955d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-deactivate, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , ceph=True, CEPH_REF=squid) 2026-03-09T13:35:49.391 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:35:48 vm04 podman[95655]: 2026-03-09 13:35:48.919600801 +0000 UTC m=+0.137853599 container create a5a7f511239a348ee3cb1f1b53f482abed2c2ea8a6b0d3ce60cd532eb8208f58 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T13:35:49.391 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:35:48 vm04 podman[95655]: 2026-03-09 13:35:48.855982033 +0000 UTC m=+0.074234831 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T13:35:49.391 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:35:49 vm04 podman[95655]: 2026-03-09 13:35:49.035201213 +0000 UTC m=+0.253454011 container init a5a7f511239a348ee3cb1f1b53f482abed2c2ea8a6b0d3ce60cd532eb8208f58 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-deactivate, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3) 2026-03-09T13:35:49.391 
INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:35:49 vm04 podman[95655]: 2026-03-09 13:35:49.0851169 +0000 UTC m=+0.303369688 container start a5a7f511239a348ee3cb1f1b53f482abed2c2ea8a6b0d3ce60cd532eb8208f58 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-deactivate, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T13:35:49.391 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:35:49 vm04 podman[95655]: 2026-03-09 13:35:49.121795069 +0000 UTC m=+0.340047867 container attach a5a7f511239a348ee3cb1f1b53f482abed2c2ea8a6b0d3ce60cd532eb8208f58 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-deactivate, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T13:35:53.002 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:35:52 vm04 conmon[95892]: conmon a5a7f511239a348ee3cb : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a5a7f511239a348ee3cb1f1b53f482abed2c2ea8a6b0d3ce60cd532eb8208f58.scope/container/memory.events 2026-03-09T13:35:53.003 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:35:52 vm04 podman[95655]: 2026-03-09 13:35:52.802971191 +0000 UTC m=+4.021223989 container died a5a7f511239a348ee3cb1f1b53f482abed2c2ea8a6b0d3ce60cd532eb8208f58 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3) 2026-03-09T13:35:53.261 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:35:53 vm04 podman[95655]: 2026-03-09 13:35:53.252246901 +0000 UTC m=+4.470499699 container remove 
a5a7f511239a348ee3cb1f1b53f482abed2c2ea8a6b0d3ce60cd532eb8208f58 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-deactivate, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.license=GPLv2) 2026-03-09T13:35:53.262 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:35:53 vm04 conmon[95356]: conmon a83dff13c033aacf9737 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a83dff13c033aacf97378238118200bc245c6eeefc586a514e6e562495b4955d.scope/container/memory.events 2026-03-09T13:35:53.262 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:35:53 vm04 podman[95293]: 2026-03-09 13:35:53.017650229 +0000 UTC m=+5.227420920 container died a83dff13c033aacf97378238118200bc245c6eeefc586a514e6e562495b4955d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-deactivate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, ceph=True) 2026-03-09T13:35:53.568 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:35:53 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.0.service: Failed with result 'exit-code'. 2026-03-09T13:35:53.568 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:35:53 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.0.service: Consumed 4.121s CPU time. 
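The conmon "Failed to open cgroups file" entries above are conmon trying to read the container cgroup's memory.events after the short-lived deactivate container has apparently already exited and its scope was torn down. A minimal sketch of the same probe, assuming cgroup v2 and using the libpod scope path copied from the log:

```python
# Sketch only: read oom_kill counts from a podman container's cgroup v2
# memory.events file, the same file conmon failed to open above.
import pathlib

SCOPE = ("/sys/fs/cgroup/machine.slice/"
         "libpod-a5a7f511239a348ee3cb1f1b53f482abed2c2ea8a6b0d3ce60cd532eb8208f58.scope")
events = pathlib.Path(SCOPE, "container", "memory.events")

if events.exists():
    # Each line is "<key> <count>", e.g. "oom_kill 0".
    stats = dict(line.split() for line in events.read_text().splitlines())
    print("oom_kill:", stats.get("oom_kill", "0"))
else:
    # Expected once the container has exited: the scope directory is gone,
    # which is consistent with the "container died" events in the log.
    print("cgroup scope already removed")
```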
2026-03-09T13:35:53.568 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:35:53 vm04 podman[95293]: 2026-03-09 13:35:53.332015821 +0000 UTC m=+5.541786522 container remove a83dff13c033aacf97378238118200bc245c6eeefc586a514e6e562495b4955d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3) 2026-03-09T13:35:53.568 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:35:53 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.2.service: Failed with result 'exit-code'. 2026-03-09T13:35:53.568 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:35:53 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.2.service: Consumed 4.572s CPU time. 2026-03-09T13:35:53.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:53 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:35:53.680+0000 7f3801295140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T13:35:54.256 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:54 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:35:54.014+0000 7f3801295140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T13:35:57.133 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:56 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:35:56.858+0000 7f3801295140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T13:35:57.640 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:57 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:35:57.508+0000 7f3801295140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T13:35:57.946 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:57 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T13:35:57.946 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:57 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T13:35:57.946 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:57 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: from numpy import show_config as show_numpy_config 2026-03-09T13:35:57.947 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:57 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:35:57.720+0000 7f3801295140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T13:35:57.947 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:57 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:35:57.788+0000 7f3801295140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T13:35:57.947 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:57 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:35:57.911+0000 7f3801295140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T13:35:59.691 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:59 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:35:59.489+0000 7f3801295140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T13:35:59.691 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:59 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:35:59.637+0000 7f3801295140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T13:36:00.141 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:59 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:35:59.689+0000 7f3801295140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T13:36:00.141 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:59 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:35:59.752+0000 7f3801295140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T13:36:00.141 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:59 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:35:59.797+0000 7f3801295140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T13:36:00.141 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:35:59 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:35:59.849+0000 7f3801295140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T13:36:00.436 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:00 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:36:00.160+0000 7f3801295140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T13:36:00.436 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:00 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:36:00.295+0000 7f3801295140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T13:36:01.074 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:00 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:36:00.787+0000 7f3801295140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T13:36:01.632 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:01 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:36:01.385+0000 7f3801295140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T13:36:01.632 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:01 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:36:01.465+0000 7f3801295140 -1 mgr[py] Module selftest has 
missing NOTIFY_TYPES member 2026-03-09T13:36:01.632 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:01 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:36:01.528+0000 7f3801295140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T13:36:01.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:01 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:36:01.648+0000 7f3801295140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T13:36:01.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:01 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:36:01.750+0000 7f3801295140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T13:36:01.891 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:01 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:36:01.879+0000 7f3801295140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T13:36:02.477 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:02 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:36:02.136+0000 7f3801295140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T13:36:02.477 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:02 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:36:02.475+0000 7f3801295140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T13:36:02.787 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:02 vm04 ceph-mon[50165]: Activating manager daemon a 2026-03-09T13:36:02.787 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:02 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a[94752]: 2026-03-09T13:36:02.524+0000 7f3801295140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T13:36:03.641 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:03 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.0.service: Scheduled restart job, restart counter is at 1. 2026-03-09T13:36:03.641 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:03 vm04 systemd[1]: Stopped Ceph osd.0 for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20. 2026-03-09T13:36:03.641 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:03 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.0.service: Consumed 4.121s CPU time. 2026-03-09T13:36:03.641 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:03 vm04 systemd[1]: Starting Ceph osd.0 for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20... 2026-03-09T13:36:03.641 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:03 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.2.service: Scheduled restart job, restart counter is at 1. 2026-03-09T13:36:03.641 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:03 vm04 systemd[1]: Stopped Ceph osd.2 for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20. 2026-03-09T13:36:03.641 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:03 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.2.service: Consumed 4.572s CPU time. 2026-03-09T13:36:03.641 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:03 vm04 systemd[1]: Starting Ceph osd.2 for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20... 
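The long run of "Module ... has missing NOTIFY_TYPES member" entries above comes from mgr modules that do not declare which notification kinds they consume. A minimal sketch of a module that does declare it, assuming the squid-era mgr_module API; the module body itself is hypothetical:

```python
# Hypothetical ceph-mgr module sketch: declaring NOTIFY_TYPES is what
# silences the "missing NOTIFY_TYPES member" warning seen in the log above,
# and restricts which events the mgr delivers to notify().
from typing import List

from mgr_module import MgrModule, NotifyType


class Module(MgrModule):
    # Only these notification kinds will be routed to this module.
    NOTIFY_TYPES: List[NotifyType] = [NotifyType.osd_map, NotifyType.pg_summary]

    def notify(self, notify_type: NotifyType, notify_id: str) -> None:
        self.log.debug("got notify %s (%s)", notify_type, notify_id)
```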
2026-03-09T13:36:04.007 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:03 vm04 ceph-mon[50165]: mgrmap e16: a(active, starting, since 0.172848s) 2026-03-09T13:36:04.007 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:03 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T13:36:04.007 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:03 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T13:36:04.007 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:03 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T13:36:04.007 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:03 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:36:04.007 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:03 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T13:36:04.007 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:03 vm04 ceph-mon[50165]: Manager daemon a is now available 2026-03-09T13:36:04.007 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:03 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' 2026-03-09T13:36:04.007 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:03 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:36:04.007 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:03 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T13:36:04.007 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:03 vm04 podman[102632]: 2026-03-09 13:36:03.888603808 +0000 UTC m=+0.101408487 container create d24ae4a073594393e01cbedd94ae3fff6c3dddb73aedddde0782185918eef7e1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T13:36:04.007 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:03 vm04 podman[102632]: 2026-03-09 13:36:03.832971059 +0000 UTC m=+0.045775728 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T13:36:04.007 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:03 vm04 podman[102626]: 2026-03-09 13:36:03.815738852 +0000 UTC m=+0.085866735 container create 
fe2da48f760eaea4256b1d8482895dd3307c084778d5b6620d14a25316320601 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, CEPH_REF=squid) 2026-03-09T13:36:04.007 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:03 vm04 podman[102626]: 2026-03-09 13:36:03.792543002 +0000 UTC m=+0.062670894 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T13:36:04.007 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:03 vm04 podman[102626]: 2026-03-09 13:36:03.958018546 +0000 UTC m=+0.228146428 container init fe2da48f760eaea4256b1d8482895dd3307c084778d5b6620d14a25316320601 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=squid, ceph=True) 2026-03-09T13:36:04.007 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:03 vm04 podman[102626]: 2026-03-09 13:36:03.963735307 +0000 UTC m=+0.233863189 container start fe2da48f760eaea4256b1d8482895dd3307c084778d5b6620d14a25316320601 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T13:36:04.007 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:03 vm04 podman[102626]: 2026-03-09 13:36:03.970497966 +0000 UTC m=+0.240625848 container attach fe2da48f760eaea4256b1d8482895dd3307c084778d5b6620d14a25316320601 
(image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2) 2026-03-09T13:36:04.391 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:04 vm04 podman[102632]: 2026-03-09 13:36:04.041145591 +0000 UTC m=+0.253950270 container init d24ae4a073594393e01cbedd94ae3fff6c3dddb73aedddde0782185918eef7e1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS) 2026-03-09T13:36:04.391 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:04 vm04 podman[102632]: 2026-03-09 13:36:04.046093212 +0000 UTC m=+0.258897891 container start d24ae4a073594393e01cbedd94ae3fff6c3dddb73aedddde0782185918eef7e1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0) 2026-03-09T13:36:04.391 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:04 vm04 podman[102632]: 2026-03-09 13:36:04.048094169 +0000 UTC m=+0.260898848 container attach d24ae4a073594393e01cbedd94ae3fff6c3dddb73aedddde0782185918eef7e1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2) 2026-03-09T13:36:04.774 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:04 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate[102681]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:04.774 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:04 vm04 bash[102632]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:04.774 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:04 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate[102681]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:04.774 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:04 vm04 bash[102632]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:04.776 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:04 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate[102665]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:04.776 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:04 vm04 bash[102626]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:04.776 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:04 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate[102665]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:04.776 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:04 vm04 bash[102626]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:05.052 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:04 vm04 ceph-mon[50165]: mgrmap e17: a(active, since 1.21548s) 2026-03-09T13:36:05.052 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:04 vm04 ceph-mon[50165]: pgmap v2: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%) 2026-03-09T13:36:06.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:05 vm04 ceph-mon[50165]: pgmap v3: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%) 2026-03-09T13:36:06.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:05 vm04 ceph-mon[50165]: Health check failed: Reduced data availability: 19 pgs inactive (PG_AVAILABILITY) 2026-03-09T13:36:06.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:05 vm04 ceph-mon[50165]: Health check failed: Degraded data redundancy: 10/15 objects degraded (66.667%), 4 pgs degraded (PG_DEGRADED) 2026-03-09T13:36:06.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:05 vm04 ceph-mon[50165]: mgrmap e18: a(active, since 2s) 2026-03-09T13:36:06.043 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate[102665]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T13:36:06.292 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate[102681]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T13:36:06.292 
INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 bash[102632]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T13:36:06.292 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 bash[102632]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:06.292 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate[102681]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:06.292 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate[102681]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:06.292 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 bash[102632]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:06.292 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate[102681]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T13:36:06.292 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 bash[102632]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T13:36:06.292 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 bash[102632]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-fa576dbc-75bb-44fa-8a1c-3c335860674a/osd-block-02b3e414-4f53-4659-8c7c-db2435785cbf --path /var/lib/ceph/osd/ceph-0 --no-mon-config 2026-03-09T13:36:06.292 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate[102681]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-fa576dbc-75bb-44fa-8a1c-3c335860674a/osd-block-02b3e414-4f53-4659-8c7c-db2435785cbf --path /var/lib/ceph/osd/ceph-0 --no-mon-config 2026-03-09T13:36:06.297 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate[102665]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:06.297 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 bash[102626]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T13:36:06.297 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 bash[102626]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:06.297 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate[102665]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:06.297 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 bash[102626]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T13:36:06.297 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate[102665]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T13:36:06.297 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 bash[102626]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T13:36:06.297 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate[102665]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-679aeb8e-fa96-45c0-a434-82cf2748e6ce/osd-block-c0e8bd08-9d2a-45fa-866c-c46c2a2146de 
--path /var/lib/ceph/osd/ceph-2 --no-mon-config 2026-03-09T13:36:06.297 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 bash[102626]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-679aeb8e-fa96-45c0-a434-82cf2748e6ce/osd-block-c0e8bd08-9d2a-45fa-866c-c46c2a2146de --path /var/lib/ceph/osd/ceph-2 --no-mon-config 2026-03-09T13:36:06.567 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate[102681]: Running command: /usr/bin/ln -snf /dev/ceph-fa576dbc-75bb-44fa-8a1c-3c335860674a/osd-block-02b3e414-4f53-4659-8c7c-db2435785cbf /var/lib/ceph/osd/ceph-0/block 2026-03-09T13:36:06.568 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 bash[102632]: Running command: /usr/bin/ln -snf /dev/ceph-fa576dbc-75bb-44fa-8a1c-3c335860674a/osd-block-02b3e414-4f53-4659-8c7c-db2435785cbf /var/lib/ceph/osd/ceph-0/block 2026-03-09T13:36:06.568 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate[102681]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block 2026-03-09T13:36:06.568 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 bash[102632]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block 2026-03-09T13:36:06.568 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate[102681]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-09T13:36:06.568 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 bash[102632]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-09T13:36:06.568 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate[102681]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T13:36:06.568 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 bash[102632]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T13:36:06.568 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate[102681]: --> ceph-volume lvm activate successful for osd ID: 0 2026-03-09T13:36:06.568 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 bash[102632]: --> ceph-volume lvm activate successful for osd ID: 0 2026-03-09T13:36:06.568 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 conmon[102681]: conmon d24ae4a073594393e01c : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d24ae4a073594393e01cbedd94ae3fff6c3dddb73aedddde0782185918eef7e1.scope/container/memory.events 2026-03-09T13:36:06.568 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 podman[102632]: 2026-03-09 13:36:06.384575929 +0000 UTC m=+2.597380608 container died d24ae4a073594393e01cbedd94ae3fff6c3dddb73aedddde0782185918eef7e1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T13:36:06.569 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate[102665]: Running command: /usr/bin/ln -snf /dev/ceph-679aeb8e-fa96-45c0-a434-82cf2748e6ce/osd-block-c0e8bd08-9d2a-45fa-866c-c46c2a2146de /var/lib/ceph/osd/ceph-2/block 2026-03-09T13:36:06.569 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 bash[102626]: Running command: /usr/bin/ln -snf /dev/ceph-679aeb8e-fa96-45c0-a434-82cf2748e6ce/osd-block-c0e8bd08-9d2a-45fa-866c-c46c2a2146de /var/lib/ceph/osd/ceph-2/block 2026-03-09T13:36:06.569 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate[102665]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block 2026-03-09T13:36:06.569 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 bash[102626]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block 2026-03-09T13:36:06.569 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate[102665]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-09T13:36:06.569 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 bash[102626]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-09T13:36:06.569 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate[102665]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T13:36:06.569 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 bash[102626]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T13:36:06.569 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate[102665]: --> ceph-volume lvm activate successful for osd ID: 2 2026-03-09T13:36:06.569 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 bash[102626]: --> ceph-volume lvm activate successful for osd ID: 2 2026-03-09T13:36:06.569 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 conmon[102665]: conmon fe2da48f760eaea4256b : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe2da48f760eaea4256b1d8482895dd3307c084778d5b6620d14a25316320601.scope/container/memory.events 2026-03-09T13:36:06.569 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 podman[102626]: 2026-03-09 13:36:06.535825262 +0000 UTC m=+2.805953134 container died fe2da48f760eaea4256b1d8482895dd3307c084778d5b6620d14a25316320601 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) 2026-03-09T13:36:06.897 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 podman[102632]: 2026-03-09 13:36:06.567594361 +0000 UTC m=+2.780399040 container remove d24ae4a073594393e01cbedd94ae3fff6c3dddb73aedddde0782185918eef7e1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True) 2026-03-09T13:36:06.897 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:06 vm04 podman[102626]: 2026-03-09 13:36:06.635624917 +0000 UTC m=+2.905752799 container remove fe2da48f760eaea4256b1d8482895dd3307c084778d5b6620d14a25316320601 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T13:36:07.231 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:06 vm04 podman[104769]: 2026-03-09 13:36:06.969964066 +0000 UTC m=+0.063917649 container create e707402e4739cf8bd4012e78806069397684209220e48acc7efbd111ebb496a9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3) 2026-03-09T13:36:07.231 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:07 vm04 podman[104769]: 2026-03-09 13:36:06.936904252 +0000 UTC m=+0.030857844 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T13:36:07.231 
INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:07 vm04 podman[104769]: 2026-03-09 13:36:07.055211912 +0000 UTC m=+0.149165494 container init e707402e4739cf8bd4012e78806069397684209220e48acc7efbd111ebb496a9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True) 2026-03-09T13:36:07.231 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:07 vm04 podman[104769]: 2026-03-09 13:36:07.073355946 +0000 UTC m=+0.167309538 container start e707402e4739cf8bd4012e78806069397684209220e48acc7efbd111ebb496a9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3) 2026-03-09T13:36:07.231 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:07 vm04 bash[104769]: e707402e4739cf8bd4012e78806069397684209220e48acc7efbd111ebb496a9 2026-03-09T13:36:07.231 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:07 vm04 systemd[1]: Started Ceph osd.0 for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20. 
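The osd-0-activate and osd-2-activate containers above walk the standard ceph-volume lvm activation sequence: prime the OSD directory from bluestore metadata, re-link the block device, and fix ownership, after the raw activation path finds nothing to do. A rough replay of those logged steps as a plain subprocess sketch; cephadm actually runs them inside the activate container, and the device and OSD paths below are copied from the log rather than discovered dynamically:

```python
# Illustrative replay of the "ceph-volume lvm activate" steps for osd.0
# as logged above; not how cephadm invokes them in production.
import subprocess

OSD_DIR = "/var/lib/ceph/osd/ceph-0"
BLOCK_DEV = ("/dev/ceph-fa576dbc-75bb-44fa-8a1c-3c335860674a/"
             "osd-block-02b3e414-4f53-4659-8c7c-db2435785cbf")


def run(*cmd: str) -> None:
    print("Running command:", " ".join(cmd))
    subprocess.run(cmd, check=True)


# Populate the OSD dir from bluestore metadata, then link in the block
# device and hand ownership to the ceph user, mirroring the journal entries.
run("/usr/bin/ceph-bluestore-tool", "--cluster=ceph", "prime-osd-dir",
    "--dev", BLOCK_DEV, "--path", OSD_DIR, "--no-mon-config")
run("/usr/bin/ln", "-snf", BLOCK_DEV, OSD_DIR + "/block")
run("/usr/bin/chown", "-h", "ceph:ceph", OSD_DIR + "/block")
run("/usr/bin/chown", "-R", "ceph:ceph", OSD_DIR)
```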
2026-03-09T13:36:07.563 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:07 vm04 podman[104885]: 2026-03-09 13:36:07.230113861 +0000 UTC m=+0.146495997 container create 05217250408d82dbd517ac57a94d99dbd430acb323e2d67cdcdcddba49e65745 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default) 2026-03-09T13:36:07.564 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:07 vm04 podman[104885]: 2026-03-09 13:36:07.180928081 +0000 UTC m=+0.097310227 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T13:36:07.564 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:07 vm04 podman[104885]: 2026-03-09 13:36:07.34675375 +0000 UTC m=+0.263135905 container init 05217250408d82dbd517ac57a94d99dbd430acb323e2d67cdcdcddba49e65745 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default) 2026-03-09T13:36:07.564 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:07 vm04 podman[104885]: 2026-03-09 13:36:07.350891525 +0000 UTC m=+0.267273671 container start 05217250408d82dbd517ac57a94d99dbd430acb323e2d67cdcdcddba49e65745 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T13:36:07.564 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:07 vm04 bash[104885]: 
05217250408d82dbd517ac57a94d99dbd430acb323e2d67cdcdcddba49e65745 2026-03-09T13:36:07.564 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:07 vm04 systemd[1]: Started Ceph osd.2 for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20. 2026-03-09T13:36:07.564 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:07 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' 2026-03-09T13:36:07.564 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:07 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' 2026-03-09T13:36:07.564 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:07 vm04 ceph-mon[50165]: [09/Mar/2026:13:36:06] ENGINE Bus STARTING 2026-03-09T13:36:07.564 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:07 vm04 ceph-mon[50165]: pgmap v4: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%) 2026-03-09T13:36:07.564 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:07 vm04 ceph-mon[50165]: [09/Mar/2026:13:36:06] ENGINE Serving on http://192.168.123.104:8765 2026-03-09T13:36:07.564 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:07 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' 2026-03-09T13:36:07.564 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:07 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' 2026-03-09T13:36:07.564 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:07 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T13:36:07.564 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:07 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:36:07.564 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:07 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:36:07.890 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:07 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0[104854]: 2026-03-09T13:36:07.560+0000 7efce82f2740 -1 Falling back to public interface 2026-03-09T13:36:08.641 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:08 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2[105502]: 2026-03-09T13:36:08.222+0000 7f4b1d646740 -1 Falling back to public interface 2026-03-09T13:36:09.094 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:08 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0[104854]: 2026-03-09T13:36:08.701+0000 7efce82f2740 -1 osd.0 34 log_to_monitors true 2026-03-09T13:36:09.094 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:08 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0[104854]: 2026-03-09T13:36:08.858+0000 7efcdf89c640 -1 osd.0 34 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T13:36:09.094 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: [09/Mar/2026:13:36:06] ENGINE Serving on https://192.168.123.104:7150 2026-03-09T13:36:09.094 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: [09/Mar/2026:13:36:06] ENGINE Bus STARTED 2026-03-09T13:36:09.094 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: [09/Mar/2026:13:36:06] ENGINE Client ('192.168.123.104', 
51950) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T13:36:09.094 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: Updating vm04:/etc/ceph/ceph.conf 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: Updating vm04:/var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/config/ceph.conf 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: Updating vm04:/var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/config/ceph.client.admin.keyring 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: pgmap v5: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%) 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: pgmap v6: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%) 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: pgmap v7: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%) 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: pgmap v8: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%) 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: pgmap v9: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail; 10/15 objects degraded (66.667%) 2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' 
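The "config generate-minimal-conf" dispatch followed by the "Updating vm04:/etc/ceph/ceph.conf" and keyring entries above is cephadm refreshing the host's minimal client config. A sketch of the conf half under the assumption of a node with a working admin keyring; the destination path is copied from the log:

```python
# Sketch: render a minimal ceph.conf and write it where cephadm keeps the
# host copy, mirroring the "Updating vm04:/etc/ceph/ceph.conf" entries above.
import pathlib
import subprocess

minimal_conf = subprocess.run(
    ["ceph", "config", "generate-minimal-conf"],
    check=True, capture_output=True, text=True,
).stdout
pathlib.Path("/etc/ceph/ceph.conf").write_text(minimal_conf)
```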
2026-03-09T13:36:09.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:08 vm04 ceph-mon[50165]: from='osd.0 [v2:192.168.123.104:6802/515973454,v1:192.168.123.104:6803/515973454]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-09T13:36:09.391 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:09 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2[105502]: 2026-03-09T13:36:09.092+0000 7f4b1d646740 -1 osd.2 34 log_to_monitors true
2026-03-09T13:36:10.141 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:09 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2[105502]: 2026-03-09T13:36:09.876+0000 7f4b14bf0640 -1 osd.2 34 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-09T13:36:10.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:09 vm04 ceph-mon[50165]: Health check failed: 1 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN)
2026-03-09T13:36:10.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:09 vm04 ceph-mon[50165]: Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
2026-03-09T13:36:10.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:09 vm04 ceph-mon[50165]: from='osd.0 [v2:192.168.123.104:6802/515973454,v1:192.168.123.104:6803/515973454]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-09T13:36:10.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:09 vm04 ceph-mon[50165]: osdmap e39: 3 total, 1 up, 3 in
2026-03-09T13:36:10.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:09 vm04 ceph-mon[50165]: from='osd.0 [v2:192.168.123.104:6802/515973454,v1:192.168.123.104:6803/515973454]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-09T13:36:10.141 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:09 vm04 ceph-mon[50165]: from='osd.2 [v2:192.168.123.104:6818/3670961601,v1:192.168.123.104:6819/3670961601]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-09T13:36:10.985 INFO:tasks.workunit.client.0.vm04.stderr:+ for i in ${ISCSI_CONT_IDS}
2026-03-09T13:36:10.985 INFO:tasks.workunit.client.0.vm04.stderr:++ sudo podman exec 8b5aa6f32c13 /bin/sh -c 'ps -ef | grep -c sleep'
2026-03-09T13:36:11.021 INFO:tasks.workunit.client.0.vm04.stderr:Error: no container with name or ID "8b5aa6f32c13" found: no such container
2026-03-09T13:36:11.027 INFO:tasks.workunit.client.0.vm04.stderr:+ SLEEP_COUNT=
2026-03-09T13:36:11.027 DEBUG:teuthology.orchestra.run:got remote process result: 125
2026-03-09T13:36:11.027 INFO:tasks.workunit:Stopping ['cephadm/test_iscsi_pids_limit.sh', 'cephadm/test_iscsi_etc_hosts.sh', 'cephadm/test_iscsi_setup.sh'] on client.0...
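The workunit fails at this point because it reuses a container ID captured earlier (8b5aa6f32c13) that no longer exists once the iscsi daemon's container has been recreated, so podman exec exits 125. A hedged sketch of the same check with the ID resolved by name at call time instead; the name filter is an assumption for illustration:

```python
# Sketch: count "sleep" processes inside matching containers, resolving the
# container IDs by name immediately before exec so a stale cached ID (the
# failure mode logged above) cannot occur.
import subprocess


def sleep_count(name_filter: str = "iscsi") -> int:
    ids = subprocess.run(
        ["sudo", "podman", "ps", "--filter", f"name={name_filter}",
         "--format", "{{.ID}}"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    total = 0
    for cid in ids:
        out = subprocess.run(
            ["sudo", "podman", "exec", cid, "/bin/sh", "-c",
             "ps -ef | grep -c sleep"],
            check=True, capture_output=True, text=True,
        ).stdout
        total += int(out.strip() or 0)
    return total
```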
2026-03-09T13:36:11.028 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0
2026-03-09T13:36:11.095 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:10 vm04 ceph-mon[50165]: pgmap v11: 33 pgs: 29 undersized+peered, 4 undersized+degraded+peered; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 10/15 objects degraded (66.667%)
2026-03-09T13:36:11.096 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:10 vm04 ceph-mon[50165]: Health check update: 1 osds down (OSD_DOWN)
2026-03-09T13:36:11.096 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:10 vm04 ceph-mon[50165]: from='osd.2 [v2:192.168.123.104:6818/3670961601,v1:192.168.123.104:6819/3670961601]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-09T13:36:11.096 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:10 vm04 ceph-mon[50165]: osd.0 [v2:192.168.123.104:6802/515973454,v1:192.168.123.104:6803/515973454] boot
2026-03-09T13:36:11.096 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:10 vm04 ceph-mon[50165]: osdmap e40: 3 total, 2 up, 3 in
2026-03-09T13:36:11.096 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:10 vm04 ceph-mon[50165]: from='osd.2 [v2:192.168.123.104:6818/3670961601,v1:192.168.123.104:6819/3670961601]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-09T13:36:11.096 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:10 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T13:36:11.472 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 105, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 83, in run_one_task
    return task(**kwargs)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/workunit.py", line 125, in task
    with parallel() as p:
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 84, in __exit__
    for result in self:
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 98, in __next__
    resurrect_traceback(result)
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 30, in resurrect_traceback
    raise exc.exc_info[1]
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 23, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/workunit.py", line 433, in _run_tests
    remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on vm04 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh'
2026-03-09T13:36:11.473 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-09T13:36:11.476 INFO:tasks.cephadm:Teardown begin
2026-03-09T13:36:11.476 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T13:36:11.540 INFO:tasks.cephadm:Disabling cephadm mgr module
2026-03-09T13:36:11.540 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 -- ceph mgr module disable cephadm
2026-03-09T13:36:11.767 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/mon.a/config
2026-03-09T13:36:11.786 INFO:teuthology.orchestra.run.vm04.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory
2026-03-09T13:36:11.819 DEBUG:teuthology.orchestra.run:got remote process result: 125
2026-03-09T13:36:11.819 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-09T13:36:11.819 DEBUG:teuthology.orchestra.run.vm04:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-09T13:36:11.840 INFO:tasks.cephadm:Stopping all daemons...
2026-03-09T13:36:11.840 INFO:tasks.cephadm.mon.a:Stopping mon.a...
2026-03-09T13:36:11.840 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mon.a
2026-03-09T13:36:12.052 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:11 vm04 ceph-mon[50165]: Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-09T13:36:12.052 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:11 vm04 ceph-mon[50165]: osd.2 [v2:192.168.123.104:6818/3670961601,v1:192.168.123.104:6819/3670961601] boot
2026-03-09T13:36:12.052 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:11 vm04 ceph-mon[50165]: osdmap e41: 3 total, 3 up, 3 in
2026-03-09T13:36:12.052 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:11 vm04 ceph-mon[50165]: from='mgr.14292 192.168.123.104:0/158322050' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T13:36:12.052 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:11 vm04 systemd[1]: Stopping Ceph mon.a for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20...
2026-03-09T13:36:12.052 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:12 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mon-a[50161]: 2026-03-09T13:36:12.018+0000 7fa0a5ce4640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T13:36:12.052 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:12 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mon-a[50161]: 2026-03-09T13:36:12.018+0000 7fa0a5ce4640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-09T13:36:12.052 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:12 vm04 podman[112814]: 2026-03-09 13:36:12.051712552 +0000 UTC m=+0.047441818 container died 82c193e3313360005f221cd2027e13cebc93695c43ad101a30c0f592c4a1f945 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mon-a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20260223) 2026-03-09T13:36:12.155 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mon.a.service' 2026-03-09T13:36:12.393 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:12 vm04 podman[112814]: 2026-03-09 13:36:12.072058869 +0000 UTC m=+0.067788145 container remove 82c193e3313360005f221cd2027e13cebc93695c43ad101a30c0f592c4a1f945 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mon-a, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T13:36:12.393 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:12 vm04 bash[112814]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mon-a 2026-03-09T13:36:12.393 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:12 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mon.a.service: Deactivated successfully. 2026-03-09T13:36:12.393 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:12 vm04 systemd[1]: Stopped Ceph mon.a for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20. 
2026-03-09T13:36:12.393 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 13:36:12 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mon.a.service: Consumed 4.991s CPU time. 2026-03-09T13:36:12.654 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T13:36:12.654 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-09T13:36:12.654 INFO:tasks.cephadm.mgr.a:Stopping mgr.a... 2026-03-09T13:36:12.654 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a 2026-03-09T13:36:12.948 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:12 vm04 systemd[1]: Stopping Ceph mgr.a for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20... 2026-03-09T13:36:12.948 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:12 vm04 podman[112917]: 2026-03-09 13:36:12.815035567 +0000 UTC m=+0.075849864 container died d073ad13b238df5f64c672761b903f1d1cde9d01ab1d2654570546da034009b7 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, ceph=True) 2026-03-09T13:36:13.030 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a.service' 2026-03-09T13:36:13.391 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:12 vm04 podman[112917]: 2026-03-09 13:36:12.961349551 +0000 UTC m=+0.222163848 container remove d073ad13b238df5f64c672761b903f1d1cde9d01ab1d2654570546da034009b7 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T13:36:13.391 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:12 vm04 bash[112917]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-mgr-a 2026-03-09T13:36:13.391 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:13 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a.service: Deactivated successfully. 2026-03-09T13:36:13.391 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:13 vm04 systemd[1]: Stopped Ceph mgr.a for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20. 2026-03-09T13:36:13.391 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 09 13:36:13 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@mgr.a.service: Consumed 6.368s CPU time. 
2026-03-09T13:36:13.460 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T13:36:13.460 INFO:tasks.cephadm.mgr.a:Stopped mgr.a 2026-03-09T13:36:13.460 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-09T13:36:13.460 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.0 2026-03-09T13:36:13.891 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:13 vm04 systemd[1]: Stopping Ceph osd.0 for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20... 2026-03-09T13:36:13.891 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:13 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0[104854]: 2026-03-09T13:36:13.621+0000 7efce5287640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T13:36:13.891 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:13 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0[104854]: 2026-03-09T13:36:13.621+0000 7efce5287640 -1 osd.0 42 *** Got signal Terminated *** 2026-03-09T13:36:13.891 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:13 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0[104854]: 2026-03-09T13:36:13.621+0000 7efce5287640 -1 osd.0 42 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T13:36:18.891 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:18 vm04 podman[113018]: 2026-03-09 13:36:18.636786486 +0000 UTC m=+5.041832321 container died e707402e4739cf8bd4012e78806069397684209220e48acc7efbd111ebb496a9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=squid) 2026-03-09T13:36:18.891 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:18 vm04 podman[113018]: 2026-03-09 13:36:18.66267204 +0000 UTC m=+5.067717875 container remove e707402e4739cf8bd4012e78806069397684209220e48acc7efbd111ebb496a9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid) 2026-03-09T13:36:18.891 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:18 vm04 bash[113018]: 
ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0 2026-03-09T13:36:18.891 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:18 vm04 podman[113086]: 2026-03-09 13:36:18.830264971 +0000 UTC m=+0.016820106 container create c6576bf465098104b28fa09b7a1cc52dac076f2e15d68bd746e8015263f53138 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-deactivate, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T13:36:18.891 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:18 vm04 podman[113086]: 2026-03-09 13:36:18.868518928 +0000 UTC m=+0.055074063 container init c6576bf465098104b28fa09b7a1cc52dac076f2e15d68bd746e8015263f53138 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-deactivate, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T13:36:18.891 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:18 vm04 podman[113086]: 2026-03-09 13:36:18.871317856 +0000 UTC m=+0.057872991 container start c6576bf465098104b28fa09b7a1cc52dac076f2e15d68bd746e8015263f53138 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True) 2026-03-09T13:36:18.891 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:18 vm04 podman[113086]: 2026-03-09 13:36:18.875326471 +0000 UTC m=+0.061881595 container attach c6576bf465098104b28fa09b7a1cc52dac076f2e15d68bd746e8015263f53138 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-deactivate, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True) 2026-03-09T13:36:19.029 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.0.service' 2026-03-09T13:36:19.391 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:18 vm04 podman[113086]: 2026-03-09 13:36:18.822832298 +0000 UTC m=+0.009387443 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T13:36:19.391 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:18 vm04 conmon[113097]: conmon c6576bf465098104b28f : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6576bf465098104b28fa09b7a1cc52dac076f2e15d68bd746e8015263f53138.scope/container/memory.events 2026-03-09T13:36:19.391 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:18 vm04 podman[113086]: 2026-03-09 13:36:18.992490044 +0000 UTC m=+0.179045169 container died c6576bf465098104b28fa09b7a1cc52dac076f2e15d68bd746e8015263f53138 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-deactivate, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0) 2026-03-09T13:36:19.391 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:19 vm04 podman[113086]: 2026-03-09 13:36:19.010782234 +0000 UTC m=+0.197337369 container remove c6576bf465098104b28fa09b7a1cc52dac076f2e15d68bd746e8015263f53138 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-0-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223) 
2026-03-09T13:36:19.391 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:19 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.0.service: Deactivated successfully. 2026-03-09T13:36:19.391 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 13:36:19 vm04 systemd[1]: Stopped Ceph osd.0 for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20. 2026-03-09T13:36:19.447 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T13:36:19.447 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-09T13:36:19.447 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-09T13:36:19.447 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.1 2026-03-09T13:36:19.891 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:19 vm04 systemd[1]: Stopping Ceph osd.1 for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20... 2026-03-09T13:36:19.891 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:19 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-1[60366]: 2026-03-09T13:36:19.588+0000 7f905f7f5640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T13:36:19.891 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:19 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-1[60366]: 2026-03-09T13:36:19.588+0000 7f905f7f5640 -1 osd.1 42 *** Got signal Terminated *** 2026-03-09T13:36:19.891 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:19 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-1[60366]: 2026-03-09T13:36:19.588+0000 7f905f7f5640 -1 osd.1 42 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T13:36:24.891 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:24 vm04 podman[113205]: 2026-03-09 13:36:24.605234309 +0000 UTC m=+5.028880784 container died 174ced76c9666f8fd48ff7c4e747238804f81af53f7d01ab7e41e2f541e28c92 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T13:36:24.891 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:24 vm04 podman[113205]: 2026-03-09 13:36:24.633414409 +0000 UTC m=+5.057060884 container remove 174ced76c9666f8fd48ff7c4e747238804f81af53f7d01ab7e41e2f541e28c92 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20260223, CEPH_REF=squid, 
org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) 2026-03-09T13:36:24.891 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:24 vm04 bash[113205]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-1 2026-03-09T13:36:24.891 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:24 vm04 podman[113641]: 2026-03-09 13:36:24.799788588 +0000 UTC m=+0.015888131 container create 75606cfef6f1724c2012c13e7533905b65752b9fd3dca8d3fb59d92654e898c2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-1-deactivate, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T13:36:24.891 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:24 vm04 podman[113641]: 2026-03-09 13:36:24.838423578 +0000 UTC m=+0.054523131 container init 75606cfef6f1724c2012c13e7533905b65752b9fd3dca8d3fb59d92654e898c2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-1-deactivate, CEPH_REF=squid, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) 2026-03-09T13:36:24.891 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:24 vm04 podman[113641]: 2026-03-09 13:36:24.841200065 +0000 UTC m=+0.057299608 container start 75606cfef6f1724c2012c13e7533905b65752b9fd3dca8d3fb59d92654e898c2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-1-deactivate, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3) 2026-03-09T13:36:24.891 
INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:24 vm04 podman[113641]: 2026-03-09 13:36:24.842029798 +0000 UTC m=+0.058129331 container attach 75606cfef6f1724c2012c13e7533905b65752b9fd3dca8d3fb59d92654e898c2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-1-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid) 2026-03-09T13:36:25.003 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.1.service' 2026-03-09T13:36:25.391 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:24 vm04 podman[113641]: 2026-03-09 13:36:24.793629649 +0000 UTC m=+0.009729192 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T13:36:25.392 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:24 vm04 podman[113641]: 2026-03-09 13:36:24.97197126 +0000 UTC m=+0.188070794 container died 75606cfef6f1724c2012c13e7533905b65752b9fd3dca8d3fb59d92654e898c2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-1-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T13:36:25.392 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:24 vm04 podman[113641]: 2026-03-09 13:36:24.988742033 +0000 UTC m=+0.204841567 container remove 75606cfef6f1724c2012c13e7533905b65752b9fd3dca8d3fb59d92654e898c2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-1-deactivate, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T13:36:25.392 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:25 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.1.service: Deactivated successfully. 2026-03-09T13:36:25.392 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:25 vm04 systemd[1]: Stopped Ceph osd.1 for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20. 2026-03-09T13:36:25.392 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 13:36:25 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.1.service: Consumed 4.498s CPU time. 2026-03-09T13:36:25.428 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T13:36:25.428 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-09T13:36:25.428 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-09T13:36:25.428 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.2 2026-03-09T13:36:25.891 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:25 vm04 systemd[1]: Stopping Ceph osd.2 for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20... 2026-03-09T13:36:25.891 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:25 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2[105502]: 2026-03-09T13:36:25.659+0000 7f4b1a5db640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T13:36:25.891 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:25 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2[105502]: 2026-03-09T13:36:25.659+0000 7f4b1a5db640 -1 osd.2 42 *** Got signal Terminated *** 2026-03-09T13:36:25.891 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:25 vm04 ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2[105502]: 2026-03-09T13:36:25.659+0000 7f4b1a5db640 -1 osd.2 42 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T13:36:30.955 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:30 vm04 podman[113737]: 2026-03-09 13:36:30.688493415 +0000 UTC m=+5.126040835 container died 05217250408d82dbd517ac57a94d99dbd430acb323e2d67cdcdcddba49e65745 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0) 2026-03-09T13:36:30.955 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:30 vm04 podman[113737]: 2026-03-09 13:36:30.717651575 +0000 UTC m=+5.155198995 container remove 05217250408d82dbd517ac57a94d99dbd430acb323e2d67cdcdcddba49e65745 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T13:36:30.955 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:30 vm04 bash[113737]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2 2026-03-09T13:36:30.955 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:30 vm04 podman[113803]: 2026-03-09 13:36:30.886987458 +0000 UTC m=+0.016921005 container create e03fd46e94d50a67ca0018b634ce61712d7caf761975faf3f149b75eb5fd8bed (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T13:36:30.955 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:30 vm04 podman[113803]: 2026-03-09 13:36:30.932123807 +0000 UTC m=+0.062057354 container init e03fd46e94d50a67ca0018b634ce61712d7caf761975faf3f149b75eb5fd8bed (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-deactivate, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T13:36:30.955 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:30 vm04 podman[113803]: 2026-03-09 13:36:30.937225247 +0000 UTC m=+0.067158794 container start e03fd46e94d50a67ca0018b634ce61712d7caf761975faf3f149b75eb5fd8bed (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-deactivate, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T13:36:30.955 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:30 vm04 podman[113803]: 2026-03-09 13:36:30.938347156 +0000 UTC m=+0.068280703 container attach e03fd46e94d50a67ca0018b634ce61712d7caf761975faf3f149b75eb5fd8bed (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) 2026-03-09T13:36:31.112 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.2.service' 2026-03-09T13:36:31.248 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:30 vm04 podman[113803]: 2026-03-09 13:36:30.879913587 +0000 UTC m=+0.009847134 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T13:36:31.248 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:31 vm04 podman[113803]: 2026-03-09 13:36:31.072003814 +0000 UTC m=+0.201937361 container died e03fd46e94d50a67ca0018b634ce61712d7caf761975faf3f149b75eb5fd8bed (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T13:36:31.248 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:31 vm04 podman[113803]: 2026-03-09 13:36:31.088390918 +0000 UTC m=+0.218324465 container remove e03fd46e94d50a67ca0018b634ce61712d7caf761975faf3f149b75eb5fd8bed (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20-osd-2-deactivate, ceph=True, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default)
2026-03-09T13:36:31.248 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:31 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.2.service: Deactivated successfully.
2026-03-09T13:36:31.248 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:31 vm04 systemd[1]: Stopped Ceph osd.2 for 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20.
2026-03-09T13:36:31.248 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 13:36:31 vm04 systemd[1]: ceph-2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20@osd.2.service: Consumed 1.041s CPU time.
2026-03-09T13:36:31.538 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T13:36:31.538 INFO:tasks.cephadm.osd.2:Stopped osd.2
2026-03-09T13:36:31.538 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 --force --keep-logs
2026-03-09T13:36:31.661 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20
2026-03-09T13:36:45.439 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T13:36:45.467 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-09T13:36:45.467 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/492/remote/vm04/crash
2026-03-09T13:36:45.467 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/crash -- .
2026-03-09T13:36:45.532 INFO:teuthology.orchestra.run.vm04.stderr:tar: /var/lib/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/crash: Cannot open: No such file or directory
2026-03-09T13:36:45.533 INFO:teuthology.orchestra.run.vm04.stderr:tar: Error is not recoverable: exiting now
2026-03-09T13:36:45.534 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-09T13:36:45.534 DEBUG:teuthology.orchestra.run.vm04:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v CEPHADM_FAILED_DAEMON | head -n 1
2026-03-09T13:36:45.601 INFO:teuthology.orchestra.run.vm04.stdout:2026-03-09T13:36:08.799130+0000 mon.a (mon.0) 463 : cluster [WRN] Health check failed: 1 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN)
2026-03-09T13:36:45.601 WARNING:tasks.cephadm:Found errors (ERR|WRN|SEC) in cluster log
2026-03-09T13:36:45.601 INFO:tasks.cephadm:Compressing logs...
2026-03-09T13:36:45.601 DEBUG:teuthology.orchestra.run.vm04:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T13:36:45.667 INFO:teuthology.orchestra.run.vm04.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-09T13:36:45.667 INFO:teuthology.orchestra.run.vm04.stderr:‘/var/log/rbd-target-api’: No such file or directory
2026-03-09T13:36:45.669 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-mon.a.log
2026-03-09T13:36:45.669 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph.log
2026-03-09T13:36:45.670 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-mgr.a.log
2026-03-09T13:36:45.672 INFO:teuthology.orchestra.run.vm04.stderr: 87.5% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-09T13:36:45.672 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph.log: 84.6% -- replaced with /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph.log.gz
2026-03-09T13:36:45.673 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph.audit.log
2026-03-09T13:36:45.682 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-mgr.a.log: gzip -5 --verbose -- /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph.cephadm.log
2026-03-09T13:36:45.683 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph.audit.log: 89.3% -- replaced with /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph.audit.log.gz
2026-03-09T13:36:45.683 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-volume.log
2026-03-09T13:36:45.688 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph.cephadm.log: 78.4% -- replaced with /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph.cephadm.log.gz
2026-03-09T13:36:45.690 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-osd.0.log
2026-03-09T13:36:45.703 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-osd.1.log
2026-03-09T13:36:45.708 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-osd.0.log: gzip -5 --verbose -- /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-osd.2.log
2026-03-09T13:36:45.708 INFO:teuthology.orchestra.run.vm04.stderr: 95.7% -- replaced with /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-volume.log.gz
2026-03-09T13:36:45.717 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/tcmu-runner.log
2026-03-09T13:36:45.730 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-osd.2.log: /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/tcmu-runner.log: 62.9% -- replaced with /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/tcmu-runner.log.gz
2026-03-09T13:36:45.734 INFO:teuthology.orchestra.run.vm04.stderr: 89.1% -- replaced with /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-mgr.a.log.gz
2026-03-09T13:36:45.803 INFO:teuthology.orchestra.run.vm04.stderr: 91.4% -- replaced with /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-mon.a.log.gz
2026-03-09T13:36:45.804 INFO:teuthology.orchestra.run.vm04.stderr: 94.9% -- replaced with /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-osd.2.log.gz
2026-03-09T13:36:45.823 INFO:teuthology.orchestra.run.vm04.stderr: 95.2% -- replaced with /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-osd.0.log.gz
2026-03-09T13:36:45.859 INFO:teuthology.orchestra.run.vm04.stderr: 95.1% -- replaced with /var/log/ceph/2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20/ceph-osd.1.log.gz
2026-03-09T13:36:45.860 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-09T13:36:45.860 INFO:teuthology.orchestra.run.vm04.stderr:real 0m0.203s
2026-03-09T13:36:45.861 INFO:teuthology.orchestra.run.vm04.stderr:user 0m0.326s
2026-03-09T13:36:45.861 INFO:teuthology.orchestra.run.vm04.stderr:sys 0m0.031s
2026-03-09T13:36:45.861 INFO:tasks.cephadm:Archiving logs...
2026-03-09T13:36:45.861 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/492/remote/vm04/log
2026-03-09T13:36:45.861 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-09T13:36:45.946 INFO:tasks.cephadm:Removing cluster...
2026-03-09T13:36:45.946 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20 --force
2026-03-09T13:36:46.110 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: 2b9d5904-1bbc-11f1-8bb4-a1ce0f711a20
2026-03-09T13:36:46.322 INFO:tasks.cephadm:Removing cephadm ...
2026-03-09T13:36:46.323 DEBUG:teuthology.orchestra.run.vm04:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-09T13:36:46.337 INFO:tasks.cephadm:Teardown complete
2026-03-09T13:36:46.337 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-09T13:36:46.339 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-03-09T13:36:46.339 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-09T13:36:46.410 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-09T13:36:46.410 DEBUG:teuthology.orchestra.run.vm04:>
2026-03-09T13:36:46.410 DEBUG:teuthology.orchestra.run.vm04:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-09T13:36:46.410 DEBUG:teuthology.orchestra.run.vm04:> sudo yum -y remove $d || true
2026-03-09T13:36:46.410 DEBUG:teuthology.orchestra.run.vm04:> done
2026-03-09T13:36:46.754 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T13:36:46.755 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T13:36:46.755 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-09T13:36:46.755 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T13:36:46.756 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-09T13:36:46.756 INFO:teuthology.orchestra.run.vm04.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M
2026-03-09T13:36:46.756 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-09T13:36:46.756 INFO:teuthology.orchestra.run.vm04.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k
2026-03-09T13:36:46.756 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:46.756 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T13:36:46.756 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T13:36:46.756 INFO:teuthology.orchestra.run.vm04.stdout:Remove 2 Packages
2026-03-09T13:36:46.756 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:46.756 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 39 M
2026-03-09T13:36:46.756 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T13:36:46.760 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T13:36:46.760 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T13:36:46.797 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T13:36:46.797 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T13:36:46.829 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-09T13:36:46.851 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-09T13:36:46.852 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:36:46.852 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-09T13:36:46.852 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-09T13:36:46.852 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-09T13:36:46.852 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:46.854 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-09T13:36:46.862 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-09T13:36:46.876 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-09T13:36:46.940 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-09T13:36:46.940 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-09T13:36:46.983 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-09T13:36:46.983 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:46.983 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-09T13:36:46.983 INFO:teuthology.orchestra.run.vm04.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-09T13:36:46.983 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:46.983 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T13:36:47.184 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout:Remove 4 Packages
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 212 M
2026-03-09T13:36:47.185 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T13:36:47.188 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T13:36:47.188 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T13:36:47.222 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T13:36:47.223 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T13:36:47.279 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-09T13:36:47.285 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-09T13:36:47.287 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-09T13:36:47.291 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-09T13:36:47.307 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-09T13:36:47.378 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-09T13:36:47.378 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-09T13:36:47.378 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-09T13:36:47.378 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-09T13:36:47.434 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-09T13:36:47.434 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:47.434 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-09T13:36:47.434 INFO:teuthology.orchestra.run.vm04.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-09T13:36:47.434 INFO:teuthology.orchestra.run.vm04.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-09T13:36:47.434 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:47.434 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T13:36:47.639 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout:Remove 8 Packages
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 28 M
2026-03-09T13:36:47.640 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T13:36:47.643 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T13:36:47.643 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T13:36:47.679 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T13:36:47.679 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T13:36:47.721 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-09T13:36:47.726 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-09T13:36:47.730 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-09T13:36:47.732 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-09T13:36:47.735 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-09T13:36:47.738 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-09T13:36:47.740 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-09T13:36:47.760 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-09T13:36:47.760 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:36:47.760 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-09T13:36:47.760 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-09T13:36:47.760 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-09T13:36:47.760 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:47.760 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-09T13:36:47.767 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-09T13:36:47.787 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-09T13:36:47.787 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:36:47.787 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-09T13:36:47.787 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-09T13:36:47.787 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-09T13:36:47.787 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:47.787 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-09T13:36:47.869 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-09T13:36:47.869 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-09T13:36:47.869 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-09T13:36:47.869 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-09T13:36:47.869 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-09T13:36:47.869 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-09T13:36:47.869 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-09T13:36:47.869 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-09T13:36:47.923 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-09T13:36:47.923 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:47.923 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-09T13:36:47.923 INFO:teuthology.orchestra.run.vm04.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:36:47.923 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:36:47.923 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:36:47.923 INFO:teuthology.orchestra.run.vm04.stdout: lua-5.4.4-4.el9.x86_64
2026-03-09T13:36:47.923 INFO:teuthology.orchestra.run.vm04.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-09T13:36:47.923 INFO:teuthology.orchestra.run.vm04.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-09T13:36:47.923 INFO:teuthology.orchestra.run.vm04.stdout: unzip-6.0-59.el9.x86_64
2026-03-09T13:36:47.923 INFO:teuthology.orchestra.run.vm04.stdout: zip-3.0-35.el9.x86_64
2026-03-09T13:36:47.923 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:47.923 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T13:36:48.133 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout:===========================================================================================
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout:===========================================================================================
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages:
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-09T13:36:48.139 INFO:teuthology.orchestra.run.vm04.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout:===========================================================================================
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout:Remove 102 Packages
2026-03-09T13:36:48.140 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:48.141 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 613 M
2026-03-09T13:36:48.141 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T13:36:48.168 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T13:36:48.168 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T13:36:48.332 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T13:36:48.333 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T13:36:48.510 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-09T13:36:48.510 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102
2026-03-09T13:36:48.519 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102
2026-03-09T13:36:48.541 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-09T13:36:48.541 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:36:48.541 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-09T13:36:48.541 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-09T13:36:48.541 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-09T13:36:48.541 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:48.541 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-09T13:36:48.558 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-09T13:36:48.578 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102
2026-03-09T13:36:48.579 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102
2026-03-09T13:36:48.656 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102
2026-03-09T13:36:48.666 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102
2026-03-09T13:36:48.670 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102
2026-03-09T13:36:48.670 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102
2026-03-09T13:36:48.683 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102
2026-03-09T13:36:48.691 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102
2026-03-09T13:36:48.695 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102
2026-03-09T13:36:48.704 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102
2026-03-09T13:36:48.708 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102
2026-03-09T13:36:48.730 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-09T13:36:48.730 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:36:48.730 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-09T13:36:48.730 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-09T13:36:48.730 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-09T13:36:48.730 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:48.731 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-09T13:36:48.740 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-09T13:36:48.760 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-09T13:36:48.760 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:36:48.760 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-09T13:36:48.760 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:48.770 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-09T13:36:48.780 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-09T13:36:48.783 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102
2026-03-09T13:36:48.788 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102
2026-03-09T13:36:48.793 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102
2026-03-09T13:36:48.804 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102
2026-03-09T13:36:48.817 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102
2026-03-09T13:36:48.823 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102
2026-03-09T13:36:48.849 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102
2026-03-09T13:36:48.857 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102
2026-03-09T13:36:48.894 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102
2026-03-09T13:36:48.903 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102
2026-03-09T13:36:48.906 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102
2026-03-09T13:36:48.915 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102
2026-03-09T13:36:48.922 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102
2026-03-09T13:36:48.922 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-09T13:36:48.930 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-09T13:36:49.041 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102
2026-03-09T13:36:49.058 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102
2026-03-09T13:36:49.071 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-09T13:36:49.071 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-09T13:36:49.071 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:49.073 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-09T13:36:49.112 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-09T13:36:49.130 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102
2026-03-09T13:36:49.135 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102
2026-03-09T13:36:49.138 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102
2026-03-09T13:36:49.141 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102
2026-03-09T13:36:49.163 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-09T13:36:49.163 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:36:49.163 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-09T13:36:49.163 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-09T13:36:49.163 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-09T13:36:49.163 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:49.164 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-09T13:36:49.178 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-09T13:36:49.181 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102
2026-03-09T13:36:49.183 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102
2026-03-09T13:36:49.187 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102
2026-03-09T13:36:49.190 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102
2026-03-09T13:36:49.194 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102
2026-03-09T13:36:49.199 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102
2026-03-09T13:36:49.203 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102
2026-03-09T13:36:49.259 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102
2026-03-09T13:36:49.271 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102
2026-03-09T13:36:49.273 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102
2026-03-09T13:36:49.275 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102
2026-03-09T13:36:49.277 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102
2026-03-09T13:36:49.280 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102
2026-03-09T13:36:49.283 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102
2026-03-09T13:36:49.303 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-09T13:36:49.303 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:36:49.303 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-09T13:36:49.303 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:49.303 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-09T13:36:49.310 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-09T13:36:49.312 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102
2026-03-09T13:36:49.314 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102
2026-03-09T13:36:49.317 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102
2026-03-09T13:36:49.321 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102
2026-03-09T13:36:49.323 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102
2026-03-09T13:36:49.326 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102
2026-03-09T13:36:49.329 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102
2026-03-09T13:36:49.333 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102
2026-03-09T13:36:49.343 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102
2026-03-09T13:36:49.349 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102
2026-03-09T13:36:49.351 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102
2026-03-09T13:36:49.355 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102
2026-03-09T13:36:49.358 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102
2026-03-09T13:36:49.365 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102
2026-03-09T13:36:49.370 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102
2026-03-09T13:36:49.376 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102
2026-03-09T13:36:49.380 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102
2026-03-09T13:36:49.388 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102
2026-03-09T13:36:49.391 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102
2026-03-09T13:36:49.395 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102
2026-03-09T13:36:49.398 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102
2026-03-09T13:36:49.404 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102
2026-03-09T13:36:49.407 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102
2026-03-09T13:36:49.411 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102
2026-03-09T13:36:49.421 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102
2026-03-09T13:36:49.427 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102
2026-03-09T13:36:49.431 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102
2026-03-09T13:36:49.434 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102
2026-03-09T13:36:49.435 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102
2026-03-09T13:36:49.442 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102
2026-03-09T13:36:49.445 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102
2026-03-09T13:36:49.466 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-09T13:36:49.466 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-09T13:36:49.466 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:49.474 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-09T13:36:49.497 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-09T13:36:49.497 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-09T13:36:49.510 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-09T13:36:49.515 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102
2026-03-09T13:36:49.518 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102
2026-03-09T13:36:49.519 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102
2026-03-09T13:36:49.520 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-09T13:36:54.630 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-09T13:36:54.631 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /sys
2026-03-09T13:36:54.631 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /proc
2026-03-09T13:36:54.631 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /mnt
2026-03-09T13:36:54.631 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /var/tmp
2026-03-09T13:36:54.631 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /home
2026-03-09T13:36:54.631 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /root
2026-03-09T13:36:54.631 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /tmp
2026-03-09T13:36:54.631 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:54.640 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102
2026-03-09T13:36:54.655 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-09T13:36:54.655 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-09T13:36:54.662 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-09T13:36:54.664 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102
2026-03-09T13:36:54.667 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102
2026-03-09T13:36:54.669 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102
2026-03-09T13:36:54.671 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102
2026-03-09T13:36:54.671 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102
2026-03-09T13:36:54.683 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102
2026-03-09T13:36:54.685 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102
2026-03-09T13:36:54.688 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102
2026-03-09T13:36:54.691 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102
2026-03-09T13:36:54.693 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102
2026-03-09T13:36:54.700 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102
2026-03-09T13:36:54.709 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102
2026-03-09T13:36:54.714 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102
2026-03-09T13:36:54.714 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102
2026-03-09T13:36:54.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102
2026-03-09T13:36:54.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102
2026-03-09T13:36:54.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102
2026-03-09T13:36:54.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102
2026-03-09T13:36:54.895 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-09T13:36:54.895 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T13:36:54.896 INFO:teuthology.orchestra.run.vm04.stdout: 
python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: 
python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:54.897 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:55.103 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T13:36:55.103 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T13:36:55.103 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size 2026-03-09T13:36:55.103 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T13:36:55.103 INFO:teuthology.orchestra.run.vm04.stdout:Removing: 2026-03-09T13:36:55.103 INFO:teuthology.orchestra.run.vm04.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k 2026-03-09T13:36:55.103 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:55.103 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T13:36:55.103 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T13:36:55.103 INFO:teuthology.orchestra.run.vm04.stdout:Remove 1 Package 2026-03-09T13:36:55.103 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:55.104 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 775 k 2026-03-09T13:36:55.104 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-09T13:36:55.105 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-09T13:36:55.105 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-09T13:36:55.106 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 2026-03-09T13:36:55.107 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-09T13:36:55.122 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-09T13:36:55.123 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T13:36:55.224 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T13:36:55.270 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T13:36:55.270 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:55.270 INFO:teuthology.orchestra.run.vm04.stdout:Removed: 2026-03-09T13:36:55.270 INFO:teuthology.orchestra.run.vm04.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T13:36:55.270 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:55.270 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 
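The install task tears packages down with one dnf invocation per name, so a name that is already gone (like ceph-immutable-object-cache just below) produces "No match for argument" plus an empty transaction instead of an error, and the teardown keeps moving. A minimal sketch of that tolerant loop, assuming a local dnf and an illustrative package list rather than the task's real one:

    import subprocess

    # Hypothetical subset of the ceph package list; the real task
    # derives its list from the project's package map.
    LEFTOVERS = ["cephadm", "ceph-immutable-object-cache", "ceph-mgr",
                 "ceph-mgr-dashboard", "ceph-fuse"]

    def remove_leftovers(packages):
        for pkg in packages:
            # dnf remove of an absent package normally exits 0 and
            # prints "No match for argument: <pkg>" / "Nothing to do.",
            # so the loop continues instead of aborting the cleanup.
            proc = subprocess.run(
                ["sudo", "dnf", "-y", "remove", pkg],
                capture_output=True, text=True)
            if "No match for argument" in proc.stdout + proc.stderr:
                print(f"{pkg}: already gone")
            elif proc.returncode != 0:
                raise RuntimeError(f"failed to remove {pkg}: {proc.stderr}")

    remove_leftovers(LEFTOVERS)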
2026-03-09T13:36:55.437 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-immutable-object-cache 2026-03-09T13:36:55.437 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T13:36:55.441 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T13:36:55.441 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T13:36:55.441 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:55.602 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr 2026-03-09T13:36:55.602 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T13:36:55.605 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T13:36:55.606 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T13:36:55.606 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:55.767 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-dashboard 2026-03-09T13:36:55.767 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T13:36:55.770 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T13:36:55.771 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T13:36:55.771 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:55.931 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-09T13:36:55.931 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T13:36:55.935 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T13:36:55.935 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T13:36:55.935 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:56.095 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-rook 2026-03-09T13:36:56.095 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T13:36:56.098 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T13:36:56.098 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T13:36:56.099 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:56.260 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-cephadm 2026-03-09T13:36:56.260 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T13:36:56.263 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T13:36:56.264 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T13:36:56.264 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:56.431 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 
2026-03-09T13:36:56.431 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T13:36:56.431 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size 2026-03-09T13:36:56.431 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T13:36:56.431 INFO:teuthology.orchestra.run.vm04.stdout:Removing: 2026-03-09T13:36:56.431 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M 2026-03-09T13:36:56.431 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:56.431 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T13:36:56.431 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T13:36:56.431 INFO:teuthology.orchestra.run.vm04.stdout:Remove 1 Package 2026-03-09T13:36:56.431 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:56.431 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 3.6 M 2026-03-09T13:36:56.431 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-09T13:36:56.433 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-09T13:36:56.433 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-09T13:36:56.442 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 2026-03-09T13:36:56.442 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-09T13:36:56.467 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-09T13:36:56.481 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T13:36:56.555 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T13:36:56.601 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T13:36:56.601 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:56.601 INFO:teuthology.orchestra.run.vm04.stdout:Removed: 2026-03-09T13:36:56.601 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:56.601 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:56.601 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:56.769 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-volume 2026-03-09T13:36:56.769 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T13:36:56.772 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T13:36:56.772 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T13:36:56.772 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:56.936 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 
2026-03-09T13:36:56.937 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T13:36:56.937 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repo Size 2026-03-09T13:36:56.937 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T13:36:56.937 INFO:teuthology.orchestra.run.vm04.stdout:Removing: 2026-03-09T13:36:56.937 INFO:teuthology.orchestra.run.vm04.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k 2026-03-09T13:36:56.937 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages: 2026-03-09T13:36:56.937 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k 2026-03-09T13:36:56.937 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:56.937 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T13:36:56.937 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T13:36:56.937 INFO:teuthology.orchestra.run.vm04.stdout:Remove 2 Packages 2026-03-09T13:36:56.937 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:56.937 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 610 k 2026-03-09T13:36:56.937 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-09T13:36:56.939 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-09T13:36:56.939 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-09T13:36:56.949 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 2026-03-09T13:36:56.949 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-09T13:36:56.973 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-09T13:36:56.975 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T13:36:56.988 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T13:36:57.041 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T13:36:57.041 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T13:36:57.086 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T13:36:57.086 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:57.087 INFO:teuthology.orchestra.run.vm04.stdout:Removed: 2026-03-09T13:36:57.087 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:57.087 INFO:teuthology.orchestra.run.vm04.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:57.087 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:57.087 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 
2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repo Size 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout:Removing: 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages: 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies: 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout:Remove 3 Packages 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 3.7 M 2026-03-09T13:36:57.258 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-09T13:36:57.260 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-09T13:36:57.260 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-09T13:36:57.275 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 
2026-03-09T13:36:57.276 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-09T13:36:57.306 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-09T13:36:57.307 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3 2026-03-09T13:36:57.308 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3 2026-03-09T13:36:57.309 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-09T13:36:57.368 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-09T13:36:57.368 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3 2026-03-09T13:36:57.368 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3 2026-03-09T13:36:57.407 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-09T13:36:57.407 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:57.407 INFO:teuthology.orchestra.run.vm04.stdout:Removed: 2026-03-09T13:36:57.407 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:57.407 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:57.407 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:57.407 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:57.407 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:57.558 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: libcephfs-devel 2026-03-09T13:36:57.558 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T13:36:57.561 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T13:36:57.562 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T13:36:57.562 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:57.720 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 
2026-03-09T13:36:57.721 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T13:36:57.721 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size 2026-03-09T13:36:57.721 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T13:36:57.721 INFO:teuthology.orchestra.run.vm04.stdout:Removing: 2026-03-09T13:36:57.721 INFO:teuthology.orchestra.run.vm04.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M 2026-03-09T13:36:57.721 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages: 2026-03-09T13:36:57.721 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M 2026-03-09T13:36:57.721 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M 2026-03-09T13:36:57.721 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies: 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout:Remove 20 Packages 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 79 M 
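The table above shows dnf's reverse-dependency resolution at work: erasing librados2 takes out everything that requires it ("Removing dependent packages") and whatever was only installed to support those ("Removing unused dependencies"). One way to preview such a cascade before committing the transaction is dnf repoquery; a sketch, with the caveat that this lists only direct installed requirers, not the fully resolved set:

    import subprocess

    def installed_reverse_deps(pkg):
        # Installed packages that directly require `pkg`; roughly the
        # candidates dnf lists under "Removing dependent packages".
        out = subprocess.run(
            ["dnf", "repoquery", "--installed", "--whatrequires", pkg],
            capture_output=True, text=True, check=True).stdout
        return [line for line in out.splitlines() if line]

    # e.g. before erasing librados2, this would show requirers such as
    # librbd1 and python3-rados, in line with the table above.
    print(installed_reverse_deps("librados2"))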
2026-03-09T13:36:57.722 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-09T13:36:57.725 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-09T13:36:57.725 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-09T13:36:57.746 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 2026-03-09T13:36:57.746 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-09T13:36:57.787 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-09T13:36:57.789 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20 2026-03-09T13:36:57.791 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20 2026-03-09T13:36:57.793 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20 2026-03-09T13:36:57.793 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20 2026-03-09T13:36:57.804 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20 2026-03-09T13:36:57.806 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20 2026-03-09T13:36:57.807 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20 2026-03-09T13:36:57.809 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20 2026-03-09T13:36:57.810 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20 2026-03-09T13:36:57.812 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20 2026-03-09T13:36:57.812 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-09T13:36:57.824 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-09T13:36:57.824 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20 2026-03-09T13:36:57.824 INFO:teuthology.orchestra.run.vm04.stdout:warning: file /etc/ceph: remove failed: No such file or directory 2026-03-09T13:36:57.824 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:57.835 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20 2026-03-09T13:36:57.837 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20 2026-03-09T13:36:57.841 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20 2026-03-09T13:36:57.844 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20 2026-03-09T13:36:57.846 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20 2026-03-09T13:36:57.849 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20 2026-03-09T13:36:57.851 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20 2026-03-09T13:36:57.852 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20 2026-03-09T13:36:57.854 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20 2026-03-09T13:36:57.867 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20 
2026-03-09T13:36:57.924 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20 2026-03-09T13:36:57.924 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20 2026-03-09T13:36:57.924 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20 2026-03-09T13:36:57.924 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20 2026-03-09T13:36:57.924 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20 2026-03-09T13:36:57.924 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20 2026-03-09T13:36:57.924 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20 2026-03-09T13:36:57.924 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20 2026-03-09T13:36:57.924 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20 2026-03-09T13:36:57.924 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20 2026-03-09T13:36:57.924 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-09T13:36:57.924 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20 2026-03-09T13:36:57.924 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20 2026-03-09T13:36:57.925 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20 2026-03-09T13:36:57.925 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20 2026-03-09T13:36:57.925 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20 2026-03-09T13:36:57.925 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20 2026-03-09T13:36:57.925 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20 2026-03-09T13:36:57.925 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20 2026-03-09T13:36:57.925 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout:Removed: 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:57.970 
INFO:teuthology.orchestra.run.vm04.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:36:57.970 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:58.161 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: librbd1 2026-03-09T13:36:58.162 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T13:36:58.163 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T13:36:58.164 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T13:36:58.164 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:58.330 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-rados 2026-03-09T13:36:58.330 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T13:36:58.331 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T13:36:58.332 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T13:36:58.332 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:58.487 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-rgw 2026-03-09T13:36:58.487 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T13:36:58.489 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T13:36:58.489 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T13:36:58.489 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:58.645 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-cephfs 2026-03-09T13:36:58.645 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T13:36:58.647 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T13:36:58.648 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T13:36:58.648 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T13:36:58.803 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-rbd 2026-03-09T13:36:58.804 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T13:36:58.805 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T13:36:58.806 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 
2026-03-09T13:36:58.806 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T13:36:58.964 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: rbd-fuse
2026-03-09T13:36:58.964 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T13:36:58.966 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T13:36:58.966 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T13:36:58.966 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T13:36:59.122 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: rbd-mirror
2026-03-09T13:36:59.123 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T13:36:59.124 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T13:36:59.125 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T13:36:59.125 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T13:36:59.283 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: rbd-nbd
2026-03-09T13:36:59.284 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T13:36:59.285 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T13:36:59.286 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T13:36:59.286 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T13:36:59.307 DEBUG:teuthology.orchestra.run.vm04:> sudo yum clean all
2026-03-09T13:36:59.420 INFO:teuthology.orchestra.run.vm04.stdout:56 files removed
2026-03-09T13:36:59.439 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-09T13:36:59.462 DEBUG:teuthology.orchestra.run.vm04:> sudo yum clean expire-cache
2026-03-09T13:36:59.610 INFO:teuthology.orchestra.run.vm04.stdout:Cache was expired
2026-03-09T13:36:59.611 INFO:teuthology.orchestra.run.vm04.stdout:0 files removed
2026-03-09T13:36:59.629 DEBUG:teuthology.parallel:result is None
2026-03-09T13:36:59.629 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm04.local
2026-03-09T13:36:59.629 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-09T13:36:59.652 DEBUG:teuthology.orchestra.run.vm04:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-09T13:36:59.717 DEBUG:teuthology.parallel:result is None
2026-03-09T13:36:59.717 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-09T13:36:59.719 INFO:teuthology.task.clock:Checking final clock skew...
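The clock check that follows is a shell fallback chain (ntpq -p || chronyc sources || true): on this CentOS 9 image ntpq is absent, so chronyd answers, and the trailing || true keeps the check from ever failing the run. A rough Python equivalent of that fallback, as an illustrative sketch only:

    import subprocess

    def final_clock_check():
        # Try ntpq first, fall back to chronyc; never raise, mirroring
        # the `|| true` at the end of the shell command below.
        for cmd in (["ntpq", "-p"], ["chronyc", "sources"]):
            try:
                out = subprocess.run(cmd, capture_output=True, text=True)
            except FileNotFoundError:
                continue  # e.g. ntpq is not installed on this image
            if out.returncode == 0:
                return out.stdout
        return None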
2026-03-09T13:36:59.720 DEBUG:teuthology.orchestra.run.vm04:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T13:36:59.771 INFO:teuthology.orchestra.run.vm04.stderr:bash: line 1: ntpq: command not found
2026-03-09T13:36:59.788 INFO:teuthology.orchestra.run.vm04.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-09T13:36:59.788 INFO:teuthology.orchestra.run.vm04.stdout:===============================================================================
2026-03-09T13:36:59.788 INFO:teuthology.orchestra.run.vm04.stdout:^* srv01.spectre-net.de 2 6 377 43 -2021us[-2012us] +/- 14ms
2026-03-09T13:36:59.788 INFO:teuthology.orchestra.run.vm04.stdout:^+ basilisk.mybb.de 2 6 377 42 +1886us[+1886us] +/- 19ms
2026-03-09T13:36:59.788 INFO:teuthology.orchestra.run.vm04.stdout:^+ pve2.h4x-gamers.top 2 6 377 42 -132us[ -132us] +/- 39ms
2026-03-09T13:36:59.788 INFO:teuthology.orchestra.run.vm04.stdout:^+ cluster015.linocomm.net 2 6 377 42 +1065us[+1065us] +/- 21ms
2026-03-09T13:36:59.789 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-09T13:36:59.792 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-09T13:36:59.792 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-09T13:36:59.794 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-09T13:36:59.796 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-09T13:36:59.797 INFO:teuthology.task.internal:Duration was 531.378817 seconds
2026-03-09T13:36:59.798 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-09T13:36:59.800 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-09T13:36:59.800 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T13:36:59.870 INFO:teuthology.orchestra.run.vm04.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T13:37:00.167 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-09T13:37:00.167 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm04.local
2026-03-09T13:37:00.167 DEBUG:teuthology.orchestra.run.vm04:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T13:37:00.231 INFO:teuthology.task.internal.syslog:Gathering journalctl...
2026-03-09T13:37:00.231 DEBUG:teuthology.orchestra.run.vm04:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T13:37:00.797 INFO:teuthology.task.internal.syslog:Compressing syslogs...
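The "Checking logs for errors" step above is one long grep pipeline: match BUG/INFO/DEADLOCK lines in kern.log, strip a list of known-benign patterns, and treat any surviving line as suspect (the head -n 1 returns just the first). A rough Python equivalent, showing only a few of the ignore patterns; the full list is in the command above:

    import re

    INTERESTING = re.compile(r"\b(BUG|INFO|DEADLOCK)\b")
    # Illustrative subset of the grep -v filters in the pipeline above.
    IGNORE = [
        re.compile(r"task .* blocked for more than .* seconds"),
        re.compile(r"lockdep is turned off"),
        re.compile(r"CRON"),
        re.compile(r"ceph-crash"),
    ]

    def first_unexplained_line(path):
        # Mirrors `grep -E ... | grep -v ... | head -n 1`: return the
        # first line matching BUG/INFO/DEADLOCK but none of the
        # known-benign patterns, or None if the log is clean.
        with open(path, errors="replace") as log:
            for line in log:
                if INTERESTING.search(line) and not any(
                        p.search(line) for p in IGNORE):
                    return line.rstrip()
        return None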
2026-03-09T13:37:00.797 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T13:37:00.822 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T13:37:00.823 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T13:37:00.823 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T13:37:00.823 INFO:teuthology.orchestra.run.vm04.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T13:37:00.823 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T13:37:00.975 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 97.2% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T13:37:00.978 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-09T13:37:00.980 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-09T13:37:00.980 DEBUG:teuthology.orchestra.run.vm04:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T13:37:01.042 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-09T13:37:01.044 DEBUG:teuthology.orchestra.run.vm04:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T13:37:01.106 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = core
2026-03-09T13:37:01.118 DEBUG:teuthology.orchestra.run.vm04:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T13:37:01.171 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T13:37:01.172 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-09T13:37:01.174 INFO:teuthology.task.internal:Transferring archived files...
2026-03-09T13:37:01.174 DEBUG:teuthology.misc:Transferring archived files from vm04:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/492/remote/vm04
2026-03-09T13:37:01.174 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T13:37:01.241 INFO:teuthology.task.internal:Removing archive directory...
2026-03-09T13:37:01.241 DEBUG:teuthology.orchestra.run.vm04:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T13:37:01.293 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-09T13:37:01.296 INFO:teuthology.task.internal:Not uploading archives.
2026-03-09T13:37:01.296 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-09T13:37:01.298 INFO:teuthology.task.internal:Tidying up after the test...
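The archive transfer above streams a tarball over the SSH channel (sudo tar c -f - -C ... -- .) rather than staging a file on the test node. A sketch of what the receiving side could look like, assuming key-based ssh; pull_archive is a hypothetical helper, not teuthology's actual implementation:

    import subprocess
    import tarfile

    def pull_archive(host, remote_dir, local_dir):
        # Stream `tar c` from the remote host and unpack locally, so no
        # temporary tarball ever touches the test node's disk.
        proc = subprocess.Popen(
            ["ssh", host, "sudo", "tar", "c", "-f", "-",
             "-C", remote_dir, "--", "."],
            stdout=subprocess.PIPE)
        with tarfile.open(fileobj=proc.stdout, mode="r|") as tar:
            tar.extractall(path=local_dir)
        if proc.wait() != 0:
            raise RuntimeError(f"tar stream from {host} failed")

    # e.g. pull_archive("vm04", "/home/ubuntu/cephtest/archive",
    #                   "/archive/.../492/remote/vm04")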
2026-03-09T13:37:01.298 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T13:37:01.349 INFO:teuthology.orchestra.run.vm04.stdout: 8532145 0 drwxr-xr-x 3 ubuntu ubuntu 19 Mar 9 13:37 /home/ubuntu/cephtest
2026-03-09T13:37:01.349 INFO:teuthology.orchestra.run.vm04.stdout: 54731388 0 drwxr-xr-x 3 ubuntu ubuntu 22 Mar 9 13:33 /home/ubuntu/cephtest/mnt.0
2026-03-09T13:37:01.349 INFO:teuthology.orchestra.run.vm04.stdout: 59076101 0 drwxr-xr-x 3 ubuntu ubuntu 17 Mar 9 13:34 /home/ubuntu/cephtest/mnt.0/client.0
2026-03-09T13:37:01.349 INFO:teuthology.orchestra.run.vm04.stdout: 83989378 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 9 13:34 /home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-09T13:37:01.350 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T13:37:01.350 INFO:teuthology.orchestra.run.vm04.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-09T13:37:01.350 ERROR:teuthology.run_tasks:Manager failed: internal.base
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 53, in base
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm04 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-09T13:37:01.351 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-09T13:37:01.354 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: CommandFailedError: Command failed on vm04 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-09T13:37:01.355 INFO:teuthology.run:Summary data:
description: orch/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}}
duration: 531.3788168430328
failure_reason: 'Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on vm04 with status 125: ''mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh'''
flavor: default
owner: kyr
sentry_event: null
status: fail
success: false
2026-03-09T13:37:01.355 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T13:37:01.380 INFO:teuthology.run:FAIL
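The job's real failure is the workunit: per failure_reason, cephadm/test_iscsi_pids_limit.sh exited with status 125. The CommandFailedError above is a secondary symptom: the failed workunit left mnt.0/client.0/tmp behind, the final rmdir returns 1, and the orchestra layer turns any nonzero remote status into an exception during the task unwind. A simplified sketch of that wait/_raise_for_status pattern, not the real classes in run.py:

    import subprocess

    class CommandFailedError(Exception):
        def __init__(self, command, exitstatus, node):
            super().__init__(f"Command failed on {node} with status "
                             f"{exitstatus}: {command!r}")
            self.exitstatus = exitstatus

    def remote_run(node, command):
        # Run a command on the remote node and surface a nonzero exit
        # status as an exception, which is what the unwind of
        # internal.base reports in the traceback above.
        result = subprocess.run(["ssh", node, command])
        if result.returncode != 0:
            raise CommandFailedError(command, result.returncode, node)

    # The failing teardown step from this job:
    # remote_run("vm04", "find /home/ubuntu/cephtest -ls ; "
    #                    "rmdir -- /home/ubuntu/cephtest")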