2026-03-06T13:30:33.742 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-06T13:30:33.747 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-06T13:30:33.766 INFO:teuthology.run:Config: archive_path: /archive/irq0-2026-03-06_13:20:18-orch:cephadm:workunits-cobaltcore-storage-v19.2.3-fasttrack-3-none-default-vps/271
branch: cobaltcore-storage-v19.2.3-fasttrack-3
description: orch:cephadm:workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}}
email: null
first_in_suite: false
flavor: default
job_id: '271'
last_in_suite: false
machine_type: vps
name: irq0-2026-03-06_13:20:18-orch:cephadm:workunits-cobaltcore-storage-v19.2.3-fasttrack-3-none-default-vps
no_nested_subset: false
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: cobaltcore-storage-v19.2.3-fasttrack-3
  ansible.cephlab:
    branch: main
    repo: https://github.com/kshtsk/ceph-cm-ansible.git
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: Europe/Berlin
  ceph:
    conf:
      global:
        mon election default strategy: 1
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: false
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - CEPHADM_FAILED_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: c24117fd5525679b799527bc1bd1f1dd0a2db5e2
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_binary_url: https://download.ceph.com/rpm-19.2.3/el9/noarch/cephadm
    containers:
      image: harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3
  install:
    ceph:
      flavor: default
      sha1: c24117fd5525679b799527bc1bd1f1dd0a2db5e2
    extra_system_packages:
      deb:
      - python3-xmltodict
      - s3cmd
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - s3cmd
    repos:
    - name: ceph-source
      priority: 1
      url: https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-47-gc24117fd552/el9.clyso/SRPMS
    - name: ceph-noarch
      priority: 1
      url: https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-47-gc24117fd552/el9.clyso/noarch
    - name: ceph
      priority: 1
      url: https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-47-gc24117fd552/el9.clyso/x86_64
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
  workunit:
    branch: tt-19.2.3-fasttrack-3-no-nvme-loop
    sha1: 5726a36c3452e5b72190cfceba828abc62c819b7
owner: irq0
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - osd.0
  - osd.1
  - osd.2
  - mon.a
  - mgr.a
  - client.0
seed: 6609
sha1: c24117fd5525679b799527bc1bd1f1dd0a2db5e2
sleep_before_teardown: 0
subset: 1/64
suite: orch:cephadm:workunits
suite_branch: tt-19.2.3-fasttrack-3-no-nvme-loop
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_5726a36c3452e5b72190cfceba828abc62c819b7/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 5726a36c3452e5b72190cfceba828abc62c819b7
targets:
  vm03.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBdmRrpXmYGrbBRbxXrJyASqxkxu0rsJmXZhxk5GV6MaspfovAUstOzzX3GmgUwc1Gqv7VQzuLT/Ku7oaXM7vCY=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install runc nvmetcli nvme-cli -y
    - sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
    - sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
- install: null
- cephadm: null
- cephadm.shell:
    host.a:
    - ceph osd pool create foo
    - rbd pool init foo
    - ceph orch apply iscsi foo u p
- workunit:
    clients:
      client.0:
      - cephadm/test_iscsi_pids_limit.sh
      - cephadm/test_iscsi_etc_hosts.sh
      - cephadm/test_iscsi_setup.sh
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-06_13:20:18
tube: vps
user: irq0
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.43333
2026-03-06T13:30:33.766 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_5726a36c3452e5b72190cfceba828abc62c819b7/qa; will attempt to use it
2026-03-06T13:30:33.766 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_5726a36c3452e5b72190cfceba828abc62c819b7/qa/tasks
2026-03-06T13:30:33.766 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-06T13:30:33.766 INFO:teuthology.task.internal:Saving configuration
2026-03-06T13:30:33.771 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-06T13:30:33.771 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-06T13:30:33.776 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm03.local', 'description': '/archive/irq0-2026-03-06_13:20:18-orch:cephadm:workunits-cobaltcore-storage-v19.2.3-fasttrack-3-none-default-vps/271', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-06 12:29:51.406938', 'locked_by': 'irq0', 'mac_address': '52:55:00:00:00:03', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBdmRrpXmYGrbBRbxXrJyASqxkxu0rsJmXZhxk5GV6MaspfovAUstOzzX3GmgUwc1Gqv7VQzuLT/Ku7oaXM7vCY='}
2026-03-06T13:30:33.776 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-06T13:30:33.777 INFO:teuthology.task.internal:roles: ubuntu@vm03.local - ['host.a', 'osd.0', 'osd.1', 'osd.2', 'mon.a', 'mgr.a', 'client.0']
2026-03-06T13:30:33.777 INFO:teuthology.run_tasks:Running task console_log...
2026-03-06T13:30:33.782 DEBUG:teuthology.task.console_log:vm03 does not support IPMI; excluding
2026-03-06T13:30:33.782 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f6496cc3e20>, signals=[15])
2026-03-06T13:30:33.782 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-06T13:30:33.782 INFO:teuthology.task.internal:Opening connections...
2026-03-06T13:30:33.782 DEBUG:teuthology.task.internal:connecting to ubuntu@vm03.local
2026-03-06T13:30:33.783 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-06T13:30:33.844 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-06T13:30:33.845 DEBUG:teuthology.orchestra.run.vm03:> uname -m
2026-03-06T13:30:34.010 INFO:teuthology.orchestra.run.vm03.stdout:x86_64
2026-03-06T13:30:34.011 DEBUG:teuthology.orchestra.run.vm03:> cat /etc/os-release
2026-03-06T13:30:34.068 INFO:teuthology.orchestra.run.vm03.stdout:NAME="CentOS Stream"
2026-03-06T13:30:34.068 INFO:teuthology.orchestra.run.vm03.stdout:VERSION="9"
2026-03-06T13:30:34.068 INFO:teuthology.orchestra.run.vm03.stdout:ID="centos"
2026-03-06T13:30:34.068 INFO:teuthology.orchestra.run.vm03.stdout:ID_LIKE="rhel fedora"
2026-03-06T13:30:34.068 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_ID="9"
2026-03-06T13:30:34.068 INFO:teuthology.orchestra.run.vm03.stdout:PLATFORM_ID="platform:el9"
2026-03-06T13:30:34.068 INFO:teuthology.orchestra.run.vm03.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-06T13:30:34.068 INFO:teuthology.orchestra.run.vm03.stdout:ANSI_COLOR="0;31"
2026-03-06T13:30:34.068 INFO:teuthology.orchestra.run.vm03.stdout:LOGO="fedora-logo-icon"
2026-03-06T13:30:34.068 INFO:teuthology.orchestra.run.vm03.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-06T13:30:34.068 INFO:teuthology.orchestra.run.vm03.stdout:HOME_URL="https://centos.org/"
2026-03-06T13:30:34.068 INFO:teuthology.orchestra.run.vm03.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-06T13:30:34.068 INFO:teuthology.orchestra.run.vm03.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-06T13:30:34.068 INFO:teuthology.orchestra.run.vm03.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-06T13:30:34.068 INFO:teuthology.lock.ops:Updating vm03.local on lock server
2026-03-06T13:30:34.074 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-06T13:30:34.076 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-06T13:30:34.077 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-06T13:30:34.077 DEBUG:teuthology.orchestra.run.vm03:> test '!' -e /home/ubuntu/cephtest
2026-03-06T13:30:34.124 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-06T13:30:34.125 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-06T13:30:34.125 DEBUG:teuthology.orchestra.run.vm03:> test -z $(ls -A /var/lib/ceph)
2026-03-06T13:30:34.180 INFO:teuthology.orchestra.run.vm03.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-06T13:30:34.180 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-06T13:30:34.188 DEBUG:teuthology.orchestra.run.vm03:> test -e /ceph-qa-ready
2026-03-06T13:30:34.235 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-06T13:30:34.425 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-06T13:30:34.426 INFO:teuthology.task.internal:Creating test directory...
2026-03-06T13:30:34.426 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-06T13:30:34.445 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-06T13:30:34.446 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-06T13:30:34.447 INFO:teuthology.task.internal:Creating archive directory...
2026-03-06T13:30:34.447 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-06T13:30:34.503 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-06T13:30:34.504 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-06T13:30:34.504 DEBUG:teuthology.orchestra.run.vm03:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-06T13:30:34.558 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-06T13:30:34.559 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-06T13:30:34.628 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-06T13:30:34.640 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-06T13:30:34.641 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-06T13:30:34.645 INFO:teuthology.task.internal:Configuring sudo...
2026-03-06T13:30:34.645 DEBUG:teuthology.orchestra.run.vm03:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-06T13:30:34.706 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-06T13:30:34.708 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-06T13:30:34.708 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-06T13:30:34.763 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-06T13:30:34.828 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-06T13:30:34.888 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-06T13:30:34.888 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-06T13:30:34.947 DEBUG:teuthology.orchestra.run.vm03:> sudo service rsyslog restart
2026-03-06T13:30:35.017 INFO:teuthology.orchestra.run.vm03.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-06T13:30:35.393 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-06T13:30:35.395 INFO:teuthology.task.internal:Starting timer...
2026-03-06T13:30:35.395 INFO:teuthology.run_tasks:Running task pcp...
2026-03-06T13:30:35.398 INFO:teuthology.run_tasks:Running task selinux...
2026-03-06T13:30:35.400 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0']}
2026-03-06T13:30:35.400 INFO:teuthology.task.selinux:Excluding vm03: VMs are not yet supported
2026-03-06T13:30:35.400 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-06T13:30:35.400 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-06T13:30:35.400 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-06T13:30:35.400 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-06T13:30:35.404 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'Europe/Berlin'}}
2026-03-06T13:30:35.404 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main to origin/main
2026-03-06T13:30:35.411 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-06T13:30:35.411 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "Europe/Berlin"}' -i /tmp/teuth_ansible_inventorybnm9wzdn --limit vm03.local /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-06T13:32:19.662 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm03.local')]
2026-03-06T13:32:19.663 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm03.local'
2026-03-06T13:32:19.663 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-06T13:32:19.730 DEBUG:teuthology.orchestra.run.vm03:> true
2026-03-06T13:32:19.804 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm03.local'
2026-03-06T13:32:19.804 INFO:teuthology.run_tasks:Running task clock...
2026-03-06T13:32:19.807 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-06T13:32:19.807 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-06T13:32:19.807 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-06T13:32:19.884 INFO:teuthology.orchestra.run.vm03.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-06T13:32:19.906 INFO:teuthology.orchestra.run.vm03.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-06T13:32:19.940 INFO:teuthology.orchestra.run.vm03.stderr:sudo: ntpd: command not found
2026-03-06T13:32:19.952 INFO:teuthology.orchestra.run.vm03.stdout:506 Cannot talk to daemon
2026-03-06T13:32:19.971 INFO:teuthology.orchestra.run.vm03.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-06T13:32:19.994 INFO:teuthology.orchestra.run.vm03.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-06T13:32:20.042 INFO:teuthology.orchestra.run.vm03.stderr:bash: line 1: ntpq: command not found
2026-03-06T13:32:20.044 INFO:teuthology.orchestra.run.vm03.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-06T13:32:20.044 INFO:teuthology.orchestra.run.vm03.stdout:===============================================================================
2026-03-06T13:32:20.044 INFO:teuthology.run_tasks:Running task pexec...
2026-03-06T13:32:20.047 INFO:teuthology.task.pexec:Executing custom commands...
2026-03-06T13:32:20.047 DEBUG:teuthology.orchestra.run.vm03:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-06T13:32:20.086 DEBUG:teuthology.task.pexec:ubuntu@vm03.local< sudo dnf remove nvme-cli -y
2026-03-06T13:32:20.086 DEBUG:teuthology.task.pexec:ubuntu@vm03.local< sudo dnf install runc nvmetcli nvme-cli -y
2026-03-06T13:32:20.086 DEBUG:teuthology.task.pexec:ubuntu@vm03.local< sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-06T13:32:20.087 DEBUG:teuthology.task.pexec:ubuntu@vm03.local< sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-06T13:32:20.087 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm03.local
2026-03-06T13:32:20.087 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-06T13:32:20.087 INFO:teuthology.task.pexec:sudo dnf install runc nvmetcli nvme-cli -y
2026-03-06T13:32:20.087 INFO:teuthology.task.pexec:sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-06T13:32:20.087 INFO:teuthology.task.pexec:sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-06T13:32:20.315 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: nvme-cli
2026-03-06T13:32:20.315 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:32:20.321 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:32:20.321 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:32:20.321 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:32:20.815 INFO:teuthology.orchestra.run.vm03.stdout:Last metadata expiration check: 0:01:01 ago on Fri 06 Mar 2026 01:31:19 PM CET.
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout:Installing:
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout: runc x86_64 4:1.4.0-2.el9 appstream 4.0 M
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout:Installing dependencies:
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:32:20.949 INFO:teuthology.orchestra.run.vm03.stdout:Install 7 Packages
2026-03-06T13:32:20.950 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:32:20.950 INFO:teuthology.orchestra.run.vm03.stdout:Total download size: 6.3 M
2026-03-06T13:32:20.950 INFO:teuthology.orchestra.run.vm03.stdout:Installed size: 24 M
2026-03-06T13:32:20.950 INFO:teuthology.orchestra.run.vm03.stdout:Downloading Packages:
2026-03-06T13:32:26.032 INFO:teuthology.orchestra.run.vm03.stdout:(1/7): nvmetcli-0.8-3.el9.noarch.rpm 9.1 kB/s | 44 kB 00:04
2026-03-06T13:32:26.033 INFO:teuthology.orchestra.run.vm03.stdout:(2/7): python3-configshell-1.1.30-1.el9.noarch. 15 kB/s | 72 kB 00:04
2026-03-06T13:32:29.387 INFO:teuthology.orchestra.run.vm03.stdout:(3/7): python3-kmod-0.9-32.el9.x86_64.rpm 25 kB/s | 84 kB 00:03
2026-03-06T13:32:29.777 INFO:teuthology.orchestra.run.vm03.stdout:(4/7): python3-pyparsing-2.4.7-9.el9.noarch.rpm 40 kB/s | 150 kB 00:03
2026-03-06T13:32:29.781 INFO:teuthology.orchestra.run.vm03.stdout:(5/7): nvme-cli-2.16-1.el9.x86_64.rpm 138 kB/s | 1.2 MB 00:08
2026-03-06T13:32:30.326 INFO:teuthology.orchestra.run.vm03.stdout:(6/7): python3-urwid-2.1.2-4.el9.x86_64.rpm 892 kB/s | 837 kB 00:00
2026-03-06T13:32:30.557 INFO:teuthology.orchestra.run.vm03.stdout:(7/7): runc-1.4.0-2.el9.x86_64.rpm 5.1 MB/s | 4.0 MB 00:00
2026-03-06T13:32:30.559 INFO:teuthology.orchestra.run.vm03.stdout:--------------------------------------------------------------------------------
2026-03-06T13:32:30.559 INFO:teuthology.orchestra.run.vm03.stdout:Total 669 kB/s | 6.3 MB 00:09
2026-03-06T13:32:30.655 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-06T13:32:30.667 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-06T13:32:30.667 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-06T13:32:30.743 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-06T13:32:30.744 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-06T13:32:30.950 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-06T13:32:30.961 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/7
2026-03-06T13:32:30.975 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/7
2026-03-06T13:32:30.986 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/7
2026-03-06T13:32:30.998 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/7
2026-03-06T13:32:31.006 INFO:teuthology.orchestra.run.vm03.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/7
2026-03-06T13:32:31.055 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/7
2026-03-06T13:32:31.216 INFO:teuthology.orchestra.run.vm03.stdout: Installing : runc-4:1.4.0-2.el9.x86_64 6/7
2026-03-06T13:32:31.221 INFO:teuthology.orchestra.run.vm03.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 7/7
2026-03-06T13:32:31.613 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 7/7
2026-03-06T13:32:31.613 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-06T13:32:31.613 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:32:32.202 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/7
2026-03-06T13:32:32.202 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/7
2026-03-06T13:32:32.202 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/7
2026-03-06T13:32:32.202 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/7
2026-03-06T13:32:32.202 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/7
2026-03-06T13:32:32.202 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/7
2026-03-06T13:32:32.291 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : runc-4:1.4.0-2.el9.x86_64 7/7
2026-03-06T13:32:32.291 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:32:32.291 INFO:teuthology.orchestra.run.vm03.stdout:Installed:
2026-03-06T13:32:32.291 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-06T13:32:32.291 INFO:teuthology.orchestra.run.vm03.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-06T13:32:32.291 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-06T13:32:32.291 INFO:teuthology.orchestra.run.vm03.stdout: runc-4:1.4.0-2.el9.x86_64
2026-03-06T13:32:32.291 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:32:32.291 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:32:32.429 DEBUG:teuthology.parallel:result is None
2026-03-06T13:32:32.429 INFO:teuthology.run_tasks:Running task install...
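[editor's note] The pexec step above swaps the container runtime from crun to runc by rewriting /usr/share/containers/containers.conf (the config Podman reads) before cephadm pulls any images. A sketch of verifying the result, illustrative only and not part of the log; it merely inspects the uncommented runtime key the sed commands leave behind:

    # Check that the sed edits above left runc as the active runtime.
    from pathlib import Path

    conf = Path("/usr/share/containers/containers.conf")
    active = [
        line.strip()
        for line in conf.read_text().splitlines()
        if line.strip().startswith("runtime =")  # uncommented keys only
    ]
    assert 'runtime = "runc"' in active, f"unexpected runtime: {active}"
    print("container runtime:", active)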
2026-03-06T13:32:32.431 DEBUG:teuthology.task.install:project ceph
2026-03-06T13:32:32.432 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'c24117fd5525679b799527bc1bd1f1dd0a2db5e2'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 's3cmd']}, 'repos': [{'name': 'ceph-source', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-47-gc24117fd552/el9.clyso/SRPMS'}, {'name': 'ceph-noarch', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-47-gc24117fd552/el9.clyso/noarch'}, {'name': 'ceph', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-47-gc24117fd552/el9.clyso/x86_64'}]}
2026-03-06T13:32:32.432 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'c24117fd5525679b799527bc1bd1f1dd0a2db5e2', 'extra_system_packages': {'deb': ['python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 's3cmd']}}
2026-03-06T13:32:32.432 INFO:teuthology.task.install:Using flavor: default
2026-03-06T13:32:32.434 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-06T13:32:32.434 INFO:teuthology.task.install:extra packages: []
2026-03-06T13:32:32.434 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 's3cmd']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'c24117fd5525679b799527bc1bd1f1dd0a2db5e2', 'tag': None, 'wait_for_package': False, 'repos': [{'name': 'ceph-source', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-47-gc24117fd552/el9.clyso/SRPMS'}, {'name': 'ceph-noarch', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-47-gc24117fd552/el9.clyso/noarch'}, {'name': 'ceph', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-47-gc24117fd552/el9.clyso/x86_64'}]}
2026-03-06T13:32:32.434 DEBUG:teuthology.task.install.rpm:Adding repos: [{'name': 'ceph-source', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-47-gc24117fd552/el9.clyso/SRPMS'}, {'name': 'ceph-noarch', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-47-gc24117fd552/el9.clyso/noarch'}, {'name': 'ceph', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-47-gc24117fd552/el9.clyso/x86_64'}]
2026-03-06T13:32:32.434 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-06T13:32:32.434 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/yum.repos.d/ceph-source.repo
2026-03-06T13:32:32.474 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-06T13:32:32.474 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/yum.repos.d/ceph-noarch.repo
2026-03-06T13:32:32.553 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-06T13:32:32.553 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-06T13:32:32.627 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, s3cmd on remote rpm x86_64
2026-03-06T13:32:32.627 DEBUG:teuthology.orchestra.run.vm03:> sudo yum clean all
2026-03-06T13:32:32.806 INFO:teuthology.orchestra.run.vm03.stdout:41 files removed
2026-03-06T13:32:32.829 DEBUG:teuthology.orchestra.run.vm03:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict s3cmd
2026-03-06T13:32:33.291 INFO:teuthology.orchestra.run.vm03.stdout:ceph 301 kB/s | 86 kB 00:00
2026-03-06T13:32:33.575 INFO:teuthology.orchestra.run.vm03.stdout:ceph-noarch 45 kB/s | 12 kB 00:00
2026-03-06T13:32:33.914 INFO:teuthology.orchestra.run.vm03.stdout:ceph-source 7.0 kB/s | 2.2 kB 00:00
2026-03-06T13:32:35.163 INFO:teuthology.orchestra.run.vm03.stdout:CentOS Stream 9 - BaseOS 7.2 MB/s | 8.9 MB 00:01
2026-03-06T13:32:37.101 INFO:teuthology.orchestra.run.vm03.stdout:CentOS Stream 9 - AppStream 21 MB/s | 27 MB 00:01
2026-03-06T13:32:40.420 INFO:teuthology.orchestra.run.vm03.stdout:CentOS Stream 9 - CRB 12 MB/s | 8.0 MB 00:00
2026-03-06T13:32:42.851 INFO:teuthology.orchestra.run.vm03.stdout:CentOS Stream 9 - Extras packages 12 kB/s | 20 kB 00:01
2026-03-06T13:32:43.282 INFO:teuthology.orchestra.run.vm03.stdout:Extra Packages for Enterprise Linux 57 MB/s | 20 MB 00:00
2026-03-06T13:32:47.861 INFO:teuthology.orchestra.run.vm03.stdout:lab-extras 65 kB/s | 50 kB 00:00
2026-03-06T13:32:49.266 INFO:teuthology.orchestra.run.vm03.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-06T13:32:49.267 INFO:teuthology.orchestra.run.vm03.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-06T13:32:49.304 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:32:49.309 INFO:teuthology.orchestra.run.vm03.stdout:==============================================================================================
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout:==============================================================================================
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout:Installing:
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: bzip2 x86_64 1.0.8-11.el9 baseos 55 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 6.5 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 5.5 M
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 1.1 M
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 145 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 1.1 M
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm noarch 2:19.2.3-47.gc24117fd552.el9.clyso ceph-noarch 150 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard noarch 2:19.2.3-47.gc24117fd552.el9.clyso ceph-noarch 3.8 M
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-47.gc24117fd552.el9.clyso ceph-noarch 7.4 M
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook noarch 2:19.2.3-47.gc24117fd552.el9.clyso ceph-noarch 49 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-radosgw x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 11 M
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 50 M
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume noarch 2:19.2.3-47.gc24117fd552.el9.clyso ceph-noarch 299 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: cephadm noarch 2:19.2.3-47.gc24117fd552.el9.clyso ceph-noarch 769 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 34 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2 x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 998 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 127 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: perl-Test-Harness noarch 1:3.42-461.el9 appstream 295 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 165 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: python3-rados x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 322 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 303 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: python3-rgw x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 100 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: rbd-fuse x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 85 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 3.1 M
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: rbd-nbd x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 171 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: s3cmd noarch 2.4.0-1.el9 epel 206 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout:Upgrading:
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: librados2 x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 3.4 M
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: librbd1 x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 3.2 M
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout:Installing dependencies:
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 22 M
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards noarch 2:19.2.3-47.gc24117fd552.el9.clyso ceph-noarch 31 k
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 2.4 M
2026-03-06T13:32:49.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core noarch 2:19.2.3-47.gc24117fd552.el9.clyso ceph-noarch 252 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 4.7 M
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: ceph-osd x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 17 M
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts noarch 2:19.2.3-47.gc24117fd552.el9.clyso ceph-noarch 16 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: ceph-selinux x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 25 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: fuse x86_64 2.9.9-17.el9 baseos 80 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: libcephsqlite x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 163 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 503 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: librgw2 x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 5.4 M
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: perl-Benchmark noarch 1.23-483.el9 appstream 26 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-06T13:32:49.311 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 45 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common x86_64 2:19.2.3-47.gc24117fd552.el9.clyso ceph 142 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-isodate noarch 0.6.1-3.el9 epel 56 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-lxml x86_64 4.6.5-3.el9 appstream 1.2 M
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-msgpack x86_64 1.0.3-2.el9 epel 86 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-06T13:32:49.312 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmlsec x86_64 1.3.13-1.el9 epel 48 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: unzip x86_64 6.0-59.el9 baseos 182 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1 x86_64 1.2.29-13.el9 appstream 189 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1-openssl x86_64 1.2.29-13.el9 appstream 90 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: zip x86_64 3.0-35.el9 baseos 266 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout:Installing weak dependencies:
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents noarch 2:19.2.3-47.gc24117fd552.el9.clyso ceph-noarch 22 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-influxdb noarch 5.3.1-1.el9 epel 139 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-saml noarch 1.16.0-1.el9 epel 125 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools x86_64 1:7.2-10.el9 baseos 556 k
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout:==============================================================================================
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout:Install 148 Packages
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout:Upgrade 2 Packages
2026-03-06T13:32:49.313 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:32:49.314 INFO:teuthology.orchestra.run.vm03.stdout:Total download size: 214 M
2026-03-06T13:32:49.314 INFO:teuthology.orchestra.run.vm03.stdout:Downloading Packages:
2026-03-06T13:32:50.640 INFO:teuthology.orchestra.run.vm03.stdout:(1/150): ceph-19.2.3-47.gc24117fd552.el9.clyso. 78 kB/s | 6.5 kB 00:00
2026-03-06T13:32:50.975 INFO:teuthology.orchestra.run.vm03.stdout:(2/150): ceph-fuse-19.2.3-47.gc24117fd552.el9.c 3.4 MB/s | 1.1 MB 00:00
2026-03-06T13:32:51.025 INFO:teuthology.orchestra.run.vm03.stdout:(3/150): ceph-immutable-object-cache-19.2.3-47. 2.9 MB/s | 145 kB 00:00
2026-03-06T13:32:51.043 INFO:teuthology.orchestra.run.vm03.stdout:(4/150): ceph-base-19.2.3-47.gc24117fd552.el9.c 11 MB/s | 5.5 MB 00:00
2026-03-06T13:32:51.157 INFO:teuthology.orchestra.run.vm03.stdout:(5/150): ceph-mgr-19.2.3-47.gc24117fd552.el9.cl 9.4 MB/s | 1.1 MB 00:00
2026-03-06T13:32:51.212 INFO:teuthology.orchestra.run.vm03.stdout:(6/150): ceph-mds-19.2.3-47.gc24117fd552.el9.cl 13 MB/s | 2.4 MB 00:00
2026-03-06T13:32:51.415 INFO:teuthology.orchestra.run.vm03.stdout:(7/150): ceph-common-19.2.3-47.gc24117fd552.el9 25 MB/s | 22 MB 00:00
2026-03-06T13:32:51.469 INFO:teuthology.orchestra.run.vm03.stdout:(8/150): ceph-mon-19.2.3-47.gc24117fd552.el9.cl 15 MB/s | 4.7 MB 00:00
2026-03-06T13:32:51.508 INFO:teuthology.orchestra.run.vm03.stdout:(9/150): ceph-selinux-19.2.3-47.gc24117fd552.el 644 kB/s | 25 kB 00:00
2026-03-06T13:32:51.872 INFO:teuthology.orchestra.run.vm03.stdout:(10/150): ceph-radosgw-19.2.3-47.gc24117fd552.e 24 MB/s | 11 MB 00:00
2026-03-06T13:32:51.929 INFO:teuthology.orchestra.run.vm03.stdout:(11/150): ceph-osd-19.2.3-47.gc24117fd552.el9.c 24 MB/s | 17 MB 00:00
2026-03-06T13:32:51.930 INFO:teuthology.orchestra.run.vm03.stdout:(12/150): libcephfs-devel-19.2.3-47.gc24117fd55 577 kB/s | 34 kB 00:00
2026-03-06T13:32:51.996 INFO:teuthology.orchestra.run.vm03.stdout:(13/150): libcephsqlite-19.2.3-47.gc24117fd552. 2.4 MB/s | 163 kB 00:00
2026-03-06T13:32:52.034 INFO:teuthology.orchestra.run.vm03.stdout:(14/150): libcephfs2-19.2.3-47.gc24117fd552.el9 9.3 MB/s | 998 kB 00:00
2026-03-06T13:32:52.043 INFO:teuthology.orchestra.run.vm03.stdout:(15/150): librados-devel-19.2.3-47.gc24117fd552 2.6 MB/s | 127 kB 00:00
2026-03-06T13:32:52.085 INFO:teuthology.orchestra.run.vm03.stdout:(16/150): libradosstriper1-19.2.3-47.gc24117fd5 9.6 MB/s | 503 kB 00:00
2026-03-06T13:32:52.135 INFO:teuthology.orchestra.run.vm03.stdout:(17/150): python3-ceph-argparse-19.2.3-47.gc241 909 kB/s | 45 kB 00:00
2026-03-06T13:32:52.184 INFO:teuthology.orchestra.run.vm03.stdout:(18/150): python3-ceph-common-19.2.3-47.gc24117 2.8 MB/s | 142 kB 00:00
2026-03-06T13:32:52.236 INFO:teuthology.orchestra.run.vm03.stdout:(19/150): python3-cephfs-19.2.3-47.gc24117fd552 3.1 MB/s | 165 kB 00:00
2026-03-06T13:32:52.297 INFO:teuthology.orchestra.run.vm03.stdout:(20/150): python3-rados-19.2.3-47.gc24117fd552. 5.2 MB/s | 322 kB 00:00
2026-03-06T13:32:52.320 INFO:teuthology.orchestra.run.vm03.stdout:(21/150): librgw2-19.2.3-47.gc24117fd552.el9.cl 20 MB/s | 5.4 MB 00:00
2026-03-06T13:32:52.336 INFO:teuthology.orchestra.run.vm03.stdout:(22/150): python3-rbd-19.2.3-47.gc24117fd552.el 7.6 MB/s | 303 kB 00:00
2026-03-06T13:32:52.351 INFO:teuthology.orchestra.run.vm03.stdout:(23/150): python3-rgw-19.2.3-47.gc24117fd552.el 2.9 MB/s | 100 kB 00:00
2026-03-06T13:32:52.372 INFO:teuthology.orchestra.run.vm03.stdout:(24/150): rbd-fuse-19.2.3-47.gc24117fd552.el9.c 2.3 MB/s | 85 kB 00:00
2026-03-06T13:32:52.422 INFO:teuthology.orchestra.run.vm03.stdout:(25/150): rbd-nbd-19.2.3-47.gc24117fd552.el9.cl 3.4 MB/s | 171 kB 00:00
2026-03-06T13:32:52.458 INFO:teuthology.orchestra.run.vm03.stdout:(26/150): ceph-grafana-dashboards-19.2.3-47.gc2 893 kB/s | 31 kB 00:00
2026-03-06T13:32:52.511 INFO:teuthology.orchestra.run.vm03.stdout:(27/150): ceph-mgr-cephadm-19.2.3-47.gc24117fd5 2.8 MB/s | 150 kB 00:00
2026-03-06T13:32:52.554 INFO:teuthology.orchestra.run.vm03.stdout:(28/150): rbd-mirror-19.2.3-47.gc24117fd552.el9 15 MB/s | 3.1 MB 00:00
2026-03-06T13:32:52.684 INFO:teuthology.orchestra.run.vm03.stdout:(29/150): ceph-test-19.2.3-47.gc24117fd552.el9. 42 MB/s | 50 MB 00:01
2026-03-06T13:32:52.719 INFO:teuthology.orchestra.run.vm03.stdout:(30/150): ceph-mgr-k8sevents-19.2.3-47.gc24117f 630 kB/s | 22 kB 00:00
2026-03-06T13:32:52.748 INFO:teuthology.orchestra.run.vm03.stdout:(31/150): ceph-mgr-dashboard-19.2.3-47.gc24117f 16 MB/s | 3.8 MB 00:00
2026-03-06T13:32:52.766 INFO:teuthology.orchestra.run.vm03.stdout:(32/150): ceph-mgr-modules-core-19.2.3-47.gc241 5.3 MB/s | 252 kB 00:00
2026-03-06T13:32:52.796 INFO:teuthology.orchestra.run.vm03.stdout:(33/150): ceph-mgr-rook-19.2.3-47.gc24117fd552. 1.0 MB/s | 49 kB 00:00
2026-03-06T13:32:52.802 INFO:teuthology.orchestra.run.vm03.stdout:(34/150): ceph-prometheus-alerts-19.2.3-47.gc24 456 kB/s | 16 kB 00:00
2026-03-06T13:32:53.030 INFO:teuthology.orchestra.run.vm03.stdout:(35/150): ceph-volume-19.2.3-47.gc24117fd552.el 1.2 MB/s | 299 kB 00:00
2026-03-06T13:32:53.049 INFO:teuthology.orchestra.run.vm03.stdout:(36/150): cephadm-19.2.3-47.gc24117fd552.el9.cl 3.0 MB/s | 769 kB 00:00
2026-03-06T13:32:53.050 INFO:teuthology.orchestra.run.vm03.stdout:(37/150): bzip2-1.0.8-11.el9.x86_64.rpm 2.8 MB/s | 55 kB 00:00
2026-03-06T13:32:53.074 INFO:teuthology.orchestra.run.vm03.stdout:(38/150): fuse-2.9.9-17.el9.x86_64.rpm 3.3 MB/s | 80 kB 00:00
2026-03-06T13:32:53.103 INFO:teuthology.orchestra.run.vm03.stdout:(39/150): ceph-mgr-diskprediction-local-19.2.3- 13 MB/s | 7.4 MB 00:00
2026-03-06T13:32:53.130 INFO:teuthology.orchestra.run.vm03.stdout:(40/150): libconfig-1.7.2-9.el9.x86_64.rpm 2.6 MB/s | 72 kB 00:00
2026-03-06T13:32:53.154 INFO:teuthology.orchestra.run.vm03.stdout:(41/150): ledmon-libs-1.1.0-3.el9.x86_64.rpm 510 kB/s | 40 kB 00:00
2026-03-06T13:32:53.186 INFO:teuthology.orchestra.run.vm03.stdout:(42/150): cryptsetup-2.8.1-3.el9.x86_64.rpm 2.5 MB/s | 351 kB 00:00
2026-03-06T13:32:53.190 INFO:teuthology.orchestra.run.vm03.stdout:(43/150): mailcap-2.1.49-5.el9.noarch.rpm 8.9 MB/s | 33 kB 00:00
2026-03-06T13:32:53.208 INFO:teuthology.orchestra.run.vm03.stdout:(44/150): libquadmath-11.5.0-14.el9.x86_64.rpm 3.3 MB/s | 184 kB 00:00
2026-03-06T13:32:53.211 INFO:teuthology.orchestra.run.vm03.stdout:(45/150): pciutils-3.7.0-7.el9.x86_64.rpm 4.4 MB/s | 93 kB 00:00
2026-03-06T13:32:53.316 INFO:teuthology.orchestra.run.vm03.stdout:(46/150): python3-cryptography-36.0.1-5.el9.x86 12 MB/s | 1.2 MB 00:00
2026-03-06T13:32:53.338 INFO:teuthology.orchestra.run.vm03.stdout:(47/150): libgfortran-11.5.0-14.el9.x86_64.rpm 3.7 MB/s | 794 kB 00:00
2026-03-06T13:32:53.347 INFO:teuthology.orchestra.run.vm03.stdout:(48/150): python3-ply-3.11-14.el9.noarch.rpm 3.4 MB/s | 106 kB 00:00
2026-03-06T13:32:53.353 INFO:teuthology.orchestra.run.vm03.stdout:(49/150): python3-requests-2.25.1-10.el9.noarch 23 MB/s | 126 kB 00:00
2026-03-06T13:32:53.359 INFO:teuthology.orchestra.run.vm03.stdout:(50/150): python3-urllib3-1.26.5-7.el9.noarch.r 34 MB/s | 218 kB 00:00
2026-03-06T13:32:53.366 INFO:teuthology.orchestra.run.vm03.stdout:(51/150): python3-pycparser-2.20-6.el9.noarch.r 4.9 MB/s | 135 kB 00:00
2026-03-06T13:32:53.388 INFO:teuthology.orchestra.run.vm03.stdout:(52/150): unzip-6.0-59.el9.x86_64.rpm 8.0 MB/s | 182 kB 00:00
2026-03-06T13:32:53.389 INFO:teuthology.orchestra.run.vm03.stdout:(53/150): python3-cffi-1.14.5-5.el9.x86_64.rpm 1.4 MB/s | 253 kB 00:00
2026-03-06T13:32:53.396 INFO:teuthology.orchestra.run.vm03.stdout:(54/150): zip-3.0-35.el9.x86_64.rpm 36 MB/s | 266 kB 00:00
2026-03-06T13:32:53.466 INFO:teuthology.orchestra.run.vm03.stdout:(55/150): smartmontools-7.2-10.el9.x86_64.rpm 5.1 MB/s | 556 kB 00:00
2026-03-06T13:32:53.585 INFO:teuthology.orchestra.run.vm03.stdout:(56/150): flexiblas-3.0.4-9.el9.x86_64.rpm 157 kB/s | 30 kB 00:00
2026-03-06T13:32:53.645 INFO:teuthology.orchestra.run.vm03.stdout:(57/150): flexiblas-openblas-openmp-3.0.4-9.el9 250 kB/s | 15 kB 00:00
2026-03-06T13:32:53.667 INFO:teuthology.orchestra.run.vm03.stdout:(58/150): boost-program-options-1.75.0-13.el9.x 375 kB/s | 104 kB 00:00
2026-03-06T13:32:53.799 INFO:teuthology.orchestra.run.vm03.stdout:(59/150): libpmemobj-1.12.1-1.el9.x86_64.rpm 1.2 MB/s | 160 kB 00:00
2026-03-06T13:32:53.825 INFO:teuthology.orchestra.run.vm03.stdout:(60/150): libnbd-1.20.3-4.el9.x86_64.rpm 909 kB/s | 164 kB 00:00
2026-03-06T13:32:53.876 INFO:teuthology.orchestra.run.vm03.stdout:(61/150): librabbitmq-0.11.0-7.el9.x86_64.rpm 588 kB/s | 45 kB 00:00
2026-03-06T13:32:53.957 INFO:teuthology.orchestra.run.vm03.stdout:(62/150): libstoragemgmt-1.10.1-1.el9.x86_64.rp 3.0 MB/s | 246 kB 00:00
2026-03-06T13:32:54.017 INFO:teuthology.orchestra.run.vm03.stdout:(63/150): libxslt-1.1.34-12.el9.x86_64.rpm 3.9 MB/s | 233 kB 00:00
2026-03-06T13:32:54.151 INFO:teuthology.orchestra.run.vm03.stdout:(64/150): lttng-ust-2.12.0-6.el9.x86_64.rpm 2.1 MB/s | 292 kB 00:00
2026-03-06T13:32:54.187 INFO:teuthology.orchestra.run.vm03.stdout:(65/150): librdkafka-1.6.1-102.el9.x86_64.rpm 1.8 MB/s | 662 kB 00:00
2026-03-06T13:32:54.209 INFO:teuthology.orchestra.run.vm03.stdout:(66/150): lua-5.4.4-4.el9.x86_64.rpm 3.2 MB/s | 188 kB 00:00
2026-03-06T13:32:54.258 INFO:teuthology.orchestra.run.vm03.stdout:(67/150): openblas-0.3.29-1.el9.x86_64.rpm 595 kB/s | 42 kB 00:00
2026-03-06T13:32:54.313 INFO:teuthology.orchestra.run.vm03.stdout:(68/150): perl-Benchmark-1.23-483.el9.noarch.rp 475 kB/s | 26 kB 00:00
2026-03-06T13:32:54.365 INFO:teuthology.orchestra.run.vm03.stdout:(69/150): flexiblas-netlib-3.0.4-9.el9.x86_64.r 3.3 MB/s | 3.0 MB 00:00
2026-03-06T13:32:54.479 INFO:teuthology.orchestra.run.vm03.stdout:(70/150): perl-Test-Harness-3.42-461.el9.noarch 1.7 MB/s | 295 kB 00:00
2026-03-06T13:32:54.750 INFO:teuthology.orchestra.run.vm03.stdout:(71/150): protobuf-3.14.0-17.el9.x86_64.rpm 2.6 MB/s | 1.0 MB 00:00
2026-03-06T13:32:54.862 INFO:teuthology.orchestra.run.vm03.stdout:(72/150): python3-devel-3.9.25-3.el9.x86_64.rpm 2.1 MB/s | 244 kB 00:00
2026-03-06T13:32:54.974 INFO:teuthology.orchestra.run.vm03.stdout:(73/150): python3-jinja2-2.11.3-8.el9.noarch.rp 2.2 MB/s | 249 kB 00:00
2026-03-06T13:32:55.112 INFO:teuthology.orchestra.run.vm03.stdout:(74/150): python3-libstoragemgmt-1.10.1-1.el9.x 1.3 MB/s | 177 kB 00:00
2026-03-06T13:32:55.554 INFO:teuthology.orchestra.run.vm03.stdout:(75/150): python3-lxml-4.6.5-3.el9.x86_64.rpm 2.8 MB/s | 1.2 MB 00:00
2026-03-06T13:32:55.622 INFO:teuthology.orchestra.run.vm03.stdout:(76/150): python3-mako-1.1.4-6.el9.noarch.rpm 2.5 MB/s | 172 kB 00:00
2026-03-06T13:32:55.698 INFO:teuthology.orchestra.run.vm03.stdout:(77/150): python3-markupsafe-1.1.1-12.el9.x86_6 458 kB/s | 35 kB 00:00
2026-03-06T13:32:56.320 INFO:teuthology.orchestra.run.vm03.stdout:(78/150): openblas-openmp-0.3.29-1.el9.x86_64.r 2.5 MB/s | 5.3 MB 00:02
2026-03-06T13:32:56.892 INFO:teuthology.orchestra.run.vm03.stdout:(79/150): python3-numpy-f2py-1.23.5-2.el9.x86_6 773 kB/s | 442 kB 00:00
2026-03-06T13:32:57.023 INFO:teuthology.orchestra.run.vm03.stdout:(80/150): python3-packaging-20.9-5.el9.noarch.r 590 kB/s | 77 kB 00:00
2026-03-06T13:32:57.394 INFO:teuthology.orchestra.run.vm03.stdout:(81/150): python3-protobuf-3.14.0-17.el9.noarch 722 kB/s | 267 kB 00:00
2026-03-06T13:32:57.739 INFO:teuthology.orchestra.run.vm03.stdout:(82/150): python3-pyasn1-0.4.8-7.el9.noarch.rpm 456 kB/s | 157 kB 00:00
2026-03-06T13:32:58.196 INFO:teuthology.orchestra.run.vm03.stdout:(83/150): python3-pyasn1-modules-0.4.8-7.el9.no 610 kB/s | 277 kB 00:00
2026-03-06T13:32:58.322 INFO:teuthology.orchestra.run.vm03.stdout:(84/150): python3-requests-oauthlib-1.3.0-12.el 427 kB/s | 54 kB 00:00
2026-03-06T13:32:59.639 INFO:teuthology.orchestra.run.vm03.stdout:(85/150): python3-numpy-1.23.5-2.el9.x86_64.rpm 1.6 MB/s | 6.1 MB 00:03
2026-03-06T13:32:59.715 INFO:teuthology.orchestra.run.vm03.stdout:(86/150): python3-toml-0.10.2-6.el9.noarch.rpm 546 kB/s | 42 kB 00:00
2026-03-06T13:32:59.956 INFO:teuthology.orchestra.run.vm03.stdout:(87/150): qatlib-25.08.0-2.el9.x86_64.rpm 998 kB/s | 240 kB 00:00
2026-03-06T13:33:00.025 INFO:teuthology.orchestra.run.vm03.stdout:(88/150): qatlib-service-25.08.0-2.el9.x86_64.r 538 kB/s | 37 kB 00:00
2026-03-06T13:33:00.097 INFO:teuthology.orchestra.run.vm03.stdout:(89/150): qatzip-libs-1.3.1-1.el9.x86_64.rpm 927 kB/s | 66 kB 00:00
2026-03-06T13:33:00.318 INFO:teuthology.orchestra.run.vm03.stdout:(90/150): socat-1.7.4.1-8.el9.x86_64.rpm 1.3 MB/s | 303 kB 00:00
2026-03-06T13:33:00.525 INFO:teuthology.orchestra.run.vm03.stdout:(91/150): xmlsec1-1.2.29-13.el9.x86_64.rpm 912 kB/s | 189 kB 00:00
2026-03-06T13:33:00.666 INFO:teuthology.orchestra.run.vm03.stdout:(92/150): xmlsec1-openssl-1.2.29-13.el9.x86_64. 641 kB/s | 90 kB 00:00
2026-03-06T13:33:00.745 INFO:teuthology.orchestra.run.vm03.stdout:(93/150): python3-babel-2.9.1-2.el9.noarch.rpm 975 kB/s | 6.0 MB 00:06
2026-03-06T13:33:00.801 INFO:teuthology.orchestra.run.vm03.stdout:(94/150): xmlstarlet-1.6.1-20.el9.x86_64.rpm 474 kB/s | 64 kB 00:00
2026-03-06T13:33:00.929 INFO:teuthology.orchestra.run.vm03.stdout:(95/150): lua-devel-5.4.4-4.el9.x86_64.rpm 121 kB/s | 22 kB 00:00
2026-03-06T13:33:00.947 INFO:teuthology.orchestra.run.vm03.stdout:(96/150): abseil-cpp-20211102.0-4.el9.x86_64.rp 30 MB/s | 551 kB 00:00
2026-03-06T13:33:00.955 INFO:teuthology.orchestra.run.vm03.stdout:(97/150): gperftools-libs-2.9.1-3.el9.x86_64.rp 36 MB/s | 308 kB 00:00
2026-03-06T13:33:00.958 INFO:teuthology.orchestra.run.vm03.stdout:(98/150): grpc-data-1.46.7-10.el9.noarch.rpm 6.9 MB/s | 19 kB 00:00
2026-03-06T13:33:01.016 INFO:teuthology.orchestra.run.vm03.stdout:(99/150): libarrow-9.0.0-15.el9.x86_64.rpm 77 MB/s | 4.4 MB 00:00
2026-03-06T13:33:01.019 INFO:teuthology.orchestra.run.vm03.stdout:(100/150): libarrow-doc-9.0.0-15.el9.noarch.rpm 9.1 MB/s | 25 kB 00:00
2026-03-06T13:33:01.022 INFO:teuthology.orchestra.run.vm03.stdout:(101/150): liboath-2.6.12-1.el9.x86_64.rpm 18 MB/s | 49 kB 00:00
2026-03-06T13:33:01.025 INFO:teuthology.orchestra.run.vm03.stdout:(102/150): libunwind-1.6.2-1.el9.x86_64.rpm 22 MB/s | 67 kB 00:00
2026-03-06T13:33:01.030 INFO:teuthology.orchestra.run.vm03.stdout:(103/150): luarocks-3.9.2-5.el9.noarch.rpm 33 MB/s | 151 kB 00:00
2026-03-06T13:33:01.044 INFO:teuthology.orchestra.run.vm03.stdout:(104/150): parquet-libs-9.0.0-15.el9.x86_64.rpm 59 MB/s | 838 kB 00:00
2026-03-06T13:33:01.053 INFO:teuthology.orchestra.run.vm03.stdout:(105/150): python3-asyncssh-2.13.2-5.el9.noarch 63 MB/s | 548 kB 00:00
2026-03-06T13:33:01.056 INFO:teuthology.orchestra.run.vm03.stdout:(106/150): python3-autocommand-2.2.2-8.el9.noar 12 MB/s | 29 kB 00:00
2026-03-06T13:33:01.059 INFO:teuthology.orchestra.run.vm03.stdout:(107/150): python3-backports-tarfile-1.2.0-1.el 21 MB/s | 60 kB 00:00
2026-03-06T13:33:01.061 INFO:teuthology.orchestra.run.vm03.stdout:(108/150): python3-bcrypt-3.2.2-1.el9.x86_64.rp 17 MB/s | 43 kB 00:00
2026-03-06T13:33:01.064 INFO:teuthology.orchestra.run.vm03.stdout:(109/150): python3-cachetools-4.2.4-1.el9.noarc 14 MB/s | 32 kB 00:00
2026-03-06T13:33:01.067 INFO:teuthology.orchestra.run.vm03.stdout:(110/150): python3-certifi-2023.05.07-4.el9.noa 5.0 MB/s | 14 kB 00:00
2026-03-06T13:33:01.072 INFO:teuthology.orchestra.run.vm03.stdout:(111/150): python3-cheroot-10.0.1-4.el9.noarch.
37 MB/s | 173 kB 00:00 2026-03-06T13:33:01.078 INFO:teuthology.orchestra.run.vm03.stdout:(112/150): python3-cherrypy-18.6.1-2.el9.noarch 59 MB/s | 358 kB 00:00 2026-03-06T13:33:01.083 INFO:teuthology.orchestra.run.vm03.stdout:(113/150): python3-google-auth-2.45.0-1.el9.noa 52 MB/s | 254 kB 00:00 2026-03-06T13:33:01.109 INFO:teuthology.orchestra.run.vm03.stdout:(114/150): python3-grpcio-1.46.7-10.el9.x86_64. 78 MB/s | 2.0 MB 00:00 2026-03-06T13:33:01.114 INFO:teuthology.orchestra.run.vm03.stdout:(115/150): protobuf-compiler-3.14.0-17.el9.x86_ 2.7 MB/s | 862 kB 00:00 2026-03-06T13:33:01.115 INFO:teuthology.orchestra.run.vm03.stdout:(116/150): python3-grpcio-tools-1.46.7-10.el9.x 25 MB/s | 144 kB 00:00 2026-03-06T13:33:01.119 INFO:teuthology.orchestra.run.vm03.stdout:(117/150): python3-isodate-0.6.1-3.el9.noarch.r 19 MB/s | 56 kB 00:00 2026-03-06T13:33:01.121 INFO:teuthology.orchestra.run.vm03.stdout:(118/150): python3-jaraco-8.2.1-3.el9.noarch.rp 4.9 MB/s | 11 kB 00:00 2026-03-06T13:33:01.122 INFO:teuthology.orchestra.run.vm03.stdout:(119/150): python3-influxdb-5.3.1-1.el9.noarch. 17 MB/s | 139 kB 00:00 2026-03-06T13:33:01.124 INFO:teuthology.orchestra.run.vm03.stdout:(120/150): python3-jaraco-classes-3.2.1-5.el9.n 7.4 MB/s | 18 kB 00:00 2026-03-06T13:33:01.125 INFO:teuthology.orchestra.run.vm03.stdout:(121/150): python3-jaraco-collections-3.0.0-8.e 10 MB/s | 23 kB 00:00 2026-03-06T13:33:01.127 INFO:teuthology.orchestra.run.vm03.stdout:(122/150): python3-jaraco-context-6.0.1-3.el9.n 6.7 MB/s | 20 kB 00:00 2026-03-06T13:33:01.128 INFO:teuthology.orchestra.run.vm03.stdout:(123/150): python3-jaraco-functools-3.5.0-2.el9 7.0 MB/s | 19 kB 00:00 2026-03-06T13:33:01.130 INFO:teuthology.orchestra.run.vm03.stdout:(124/150): python3-jaraco-text-4.0.0-2.el9.noar 11 MB/s | 26 kB 00:00 2026-03-06T13:33:01.132 INFO:teuthology.orchestra.run.vm03.stdout:(125/150): python3-logutils-0.3.5-21.el9.noarch 18 MB/s | 46 kB 00:00 2026-03-06T13:33:01.136 INFO:teuthology.orchestra.run.vm03.stdout:(126/150): python3-more-itertools-8.12.0-2.el9. 
25 MB/s | 79 kB 00:00 2026-03-06T13:33:01.139 INFO:teuthology.orchestra.run.vm03.stdout:(127/150): python3-msgpack-1.0.3-2.el9.x86_64.r 29 MB/s | 86 kB 00:00 2026-03-06T13:33:01.142 INFO:teuthology.orchestra.run.vm03.stdout:(128/150): python3-natsort-7.1.1-5.el9.noarch.r 19 MB/s | 58 kB 00:00 2026-03-06T13:33:01.148 INFO:teuthology.orchestra.run.vm03.stdout:(129/150): python3-pecan-1.4.2-3.el9.noarch.rpm 44 MB/s | 272 kB 00:00 2026-03-06T13:33:01.151 INFO:teuthology.orchestra.run.vm03.stdout:(130/150): python3-kubernetes-26.1.0-3.el9.noar 45 MB/s | 1.0 MB 00:00 2026-03-06T13:33:01.151 INFO:teuthology.orchestra.run.vm03.stdout:(131/150): python3-portend-3.1.0-2.el9.noarch.r 5.4 MB/s | 16 kB 00:00 2026-03-06T13:33:01.154 INFO:teuthology.orchestra.run.vm03.stdout:(132/150): python3-pyOpenSSL-21.0.0-1.el9.noarc 31 MB/s | 90 kB 00:00 2026-03-06T13:33:01.155 INFO:teuthology.orchestra.run.vm03.stdout:(133/150): python3-repoze-lru-0.7-16.el9.noarch 7.5 MB/s | 31 kB 00:00 2026-03-06T13:33:01.158 INFO:teuthology.orchestra.run.vm03.stdout:(134/150): python3-routes-2.5.1-5.el9.noarch.rp 44 MB/s | 188 kB 00:00 2026-03-06T13:33:01.159 INFO:teuthology.orchestra.run.vm03.stdout:(135/150): python3-rsa-4.9-2.el9.noarch.rpm 15 MB/s | 59 kB 00:00 2026-03-06T13:33:01.162 INFO:teuthology.orchestra.run.vm03.stdout:(136/150): python3-saml-1.16.0-1.el9.noarch.rpm 32 MB/s | 125 kB 00:00 2026-03-06T13:33:01.162 INFO:teuthology.orchestra.run.vm03.stdout:(137/150): python3-tempora-5.0.0-2.el9.noarch.r 12 MB/s | 36 kB 00:00 2026-03-06T13:33:01.165 INFO:teuthology.orchestra.run.vm03.stdout:(138/150): python3-typing-extensions-4.15.0-1.e 30 MB/s | 86 kB 00:00 2026-03-06T13:33:01.168 INFO:teuthology.orchestra.run.vm03.stdout:(139/150): python3-websocket-client-1.2.3-2.el9 30 MB/s | 90 kB 00:00 2026-03-06T13:33:01.169 INFO:teuthology.orchestra.run.vm03.stdout:(140/150): python3-webob-1.8.8-2.el9.noarch.rpm 36 MB/s | 230 kB 00:00 2026-03-06T13:33:01.172 INFO:teuthology.orchestra.run.vm03.stdout:(141/150): python3-xmlsec-1.3.13-1.el9.x86_64.r 18 MB/s | 48 kB 00:00 2026-03-06T13:33:01.174 INFO:teuthology.orchestra.run.vm03.stdout:(142/150): python3-xmltodict-0.12.0-15.el9.noar 8.1 MB/s | 22 kB 00:00 2026-03-06T13:33:01.176 INFO:teuthology.orchestra.run.vm03.stdout:(143/150): python3-werkzeug-2.0.3-3.el9.1.noarc 50 MB/s | 427 kB 00:00 2026-03-06T13:33:01.177 INFO:teuthology.orchestra.run.vm03.stdout:(144/150): python3-zc-lockfile-2.0-10.el9.noarc 8.3 MB/s | 20 kB 00:00 2026-03-06T13:33:01.182 INFO:teuthology.orchestra.run.vm03.stdout:(145/150): re2-20211101-20.el9.x86_64.rpm 35 MB/s | 191 kB 00:00 2026-03-06T13:33:01.183 INFO:teuthology.orchestra.run.vm03.stdout:(146/150): s3cmd-2.4.0-1.el9.noarch.rpm 35 MB/s | 206 kB 00:00 2026-03-06T13:33:01.203 INFO:teuthology.orchestra.run.vm03.stdout:(147/150): thrift-0.15.0-4.el9.x86_64.rpm 76 MB/s | 1.6 MB 00:00 2026-03-06T13:33:01.591 INFO:teuthology.orchestra.run.vm03.stdout:(148/150): librbd1-19.2.3-47.gc24117fd552.el9.c 8.2 MB/s | 3.2 MB 00:00 2026-03-06T13:33:01.632 INFO:teuthology.orchestra.run.vm03.stdout:(149/150): librados2-19.2.3-47.gc24117fd552.el9 7.7 MB/s | 3.4 MB 00:00 2026-03-06T13:33:17.764 INFO:teuthology.orchestra.run.vm03.stdout:(150/150): python3-scipy-1.9.3-2.el9.x86_64.rpm 1.0 MB/s | 19 MB 00:19 2026-03-06T13:33:17.765 INFO:teuthology.orchestra.run.vm03.stdout:-------------------------------------------------------------------------------- 2026-03-06T13:33:17.765 INFO:teuthology.orchestra.run.vm03.stdout:Total 7.5 MB/s | 214 MB 00:28 2026-03-06T13:33:18.309 
INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check 2026-03-06T13:33:18.365 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded. 2026-03-06T13:33:18.366 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test 2026-03-06T13:33:19.206 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded. 2026-03-06T13:33:19.206 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction 2026-03-06T13:33:20.183 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1 2026-03-06T13:33:20.197 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/152 2026-03-06T13:33:20.210 INFO:teuthology.orchestra.run.vm03.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/152 2026-03-06T13:33:20.380 INFO:teuthology.orchestra.run.vm03.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/152 2026-03-06T13:33:20.383 INFO:teuthology.orchestra.run.vm03.stdout: Upgrading : librados2-2:19.2.3-47.gc24117fd552.el9.clyso.x86 4/152 2026-03-06T13:33:20.440 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librados2-2:19.2.3-47.gc24117fd552.el9.clyso.x86 4/152 2026-03-06T13:33:20.442 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libcephfs2-2:19.2.3-47.gc24117fd552.el9.clyso.x8 5/152 2026-03-06T13:33:20.469 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephfs2-2:19.2.3-47.gc24117fd552.el9.clyso.x8 5/152 2026-03-06T13:33:20.475 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rados-2:19.2.3-47.gc24117fd552.el9.clyso 6/152 2026-03-06T13:33:20.484 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 7/152 2026-03-06T13:33:20.487 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 8/152 2026-03-06T13:33:20.489 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 9/152 2026-03-06T13:33:20.494 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 10/152 2026-03-06T13:33:20.533 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/152 2026-03-06T13:33:20.541 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-lxml-4.6.5-3.el9.x86_64 12/152 2026-03-06T13:33:20.549 INFO:teuthology.orchestra.run.vm03.stdout: Installing : xmlsec1-1.2.29-13.el9.x86_64 13/152 2026-03-06T13:33:20.551 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libcephsqlite-2:19.2.3-47.gc24117fd552.el9.clyso 14/152 2026-03-06T13:33:20.584 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephsqlite-2:19.2.3-47.gc24117fd552.el9.clyso 14/152 2026-03-06T13:33:20.585 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libradosstriper1-2:19.2.3-47.gc24117fd552.el9.cl 15/152 2026-03-06T13:33:20.602 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libradosstriper1-2:19.2.3-47.gc24117fd552.el9.cl 15/152 2026-03-06T13:33:20.636 INFO:teuthology.orchestra.run.vm03.stdout: Installing : re2-1:20211101-20.el9.x86_64 16/152 2026-03-06T13:33:20.705 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 17/152 2026-03-06T13:33:20.917 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 18/152 2026-03-06T13:33:20.955 INFO:teuthology.orchestra.run.vm03.stdout: Installing : liboath-2.6.12-1.el9.x86_64 19/152 2026-03-06T13:33:20.991 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 20/152 2026-03-06T13:33:20.999 
INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-packaging-20.9-5.el9.noarch 21/152 2026-03-06T13:33:21.009 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 22/152 2026-03-06T13:33:21.016 INFO:teuthology.orchestra.run.vm03.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 23/152 2026-03-06T13:33:21.019 INFO:teuthology.orchestra.run.vm03.stdout: Installing : lua-5.4.4-4.el9.x86_64 24/152 2026-03-06T13:33:21.025 INFO:teuthology.orchestra.run.vm03.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 25/152 2026-03-06T13:33:21.052 INFO:teuthology.orchestra.run.vm03.stdout: Installing : unzip-6.0-59.el9.x86_64 26/152 2026-03-06T13:33:21.068 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 27/152 2026-03-06T13:33:21.073 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 28/152 2026-03-06T13:33:21.080 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 29/152 2026-03-06T13:33:21.083 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 30/152 2026-03-06T13:33:21.113 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 31/152 2026-03-06T13:33:21.119 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-ceph-common-2:19.2.3-47.gc24117fd552.el9 32/152 2026-03-06T13:33:21.129 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-ceph-argparse-2:19.2.3-47.gc24117fd552.e 33/152 2026-03-06T13:33:21.143 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cephfs-2:19.2.3-47.gc24117fd552.el9.clys 34/152 2026-03-06T13:33:21.151 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 35/152 2026-03-06T13:33:21.180 INFO:teuthology.orchestra.run.vm03.stdout: Installing : zip-3.0-35.el9.x86_64 36/152 2026-03-06T13:33:21.185 INFO:teuthology.orchestra.run.vm03.stdout: Installing : luarocks-3.9.2-5.el9.noarch 37/152 2026-03-06T13:33:21.194 INFO:teuthology.orchestra.run.vm03.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 38/152 2026-03-06T13:33:21.222 INFO:teuthology.orchestra.run.vm03.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 39/152 2026-03-06T13:33:21.284 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 40/152 2026-03-06T13:33:21.300 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 41/152 2026-03-06T13:33:21.304 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rsa-4.9-2.el9.noarch 42/152 2026-03-06T13:33:21.310 INFO:teuthology.orchestra.run.vm03.stdout: Installing : xmlsec1-openssl-1.2.29-13.el9.x86_64 43/152 2026-03-06T13:33:21.316 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-xmlsec-1.3.13-1.el9.x86_64 44/152 2026-03-06T13:33:21.322 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 45/152 2026-03-06T13:33:21.330 INFO:teuthology.orchestra.run.vm03.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 46/152 2026-03-06T13:33:21.336 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librados-devel-2:19.2.3-47.gc24117fd552.el9.clys 47/152 2026-03-06T13:33:21.340 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 48/152 2026-03-06T13:33:21.357 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 49/152 
2026-03-06T13:33:21.383 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 50/152
2026-03-06T13:33:21.389 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 51/152
2026-03-06T13:33:21.396 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 52/152
2026-03-06T13:33:21.409 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 53/152
2026-03-06T13:33:21.420 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 54/152
2026-03-06T13:33:21.428 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 55/152
2026-03-06T13:33:21.453 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-msgpack-1.0.3-2.el9.x86_64 56/152
2026-03-06T13:33:21.473 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-influxdb-5.3.1-1.el9.noarch 57/152
2026-03-06T13:33:21.538 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 58/152
2026-03-06T13:33:21.554 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 59/152
2026-03-06T13:33:21.571 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-isodate-0.6.1-3.el9.noarch 60/152
2026-03-06T13:33:21.578 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-saml-1.16.0-1.el9.noarch 61/152
2026-03-06T13:33:21.587 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 62/152
2026-03-06T13:33:21.632 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 63/152
2026-03-06T13:33:21.994 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 64/152
2026-03-06T13:33:22.009 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 65/152
2026-03-06T13:33:22.015 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 66/152
2026-03-06T13:33:22.022 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 67/152
2026-03-06T13:33:22.026 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 68/152
2026-03-06T13:33:22.033 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 69/152
2026-03-06T13:33:22.036 INFO:teuthology.orchestra.run.vm03.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 70/152
2026-03-06T13:33:22.038 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 71/152
2026-03-06T13:33:22.066 INFO:teuthology.orchestra.run.vm03.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 72/152
2026-03-06T13:33:22.114 INFO:teuthology.orchestra.run.vm03.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 73/152
2026-03-06T13:33:22.127 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 74/152
2026-03-06T13:33:22.134 INFO:teuthology.orchestra.run.vm03.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 75/152
2026-03-06T13:33:22.138 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 76/152
2026-03-06T13:33:22.145 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 77/152
2026-03-06T13:33:22.150 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 78/152
2026-03-06T13:33:22.159 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 79/152
2026-03-06T13:33:22.164 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 80/152
2026-03-06T13:33:22.194 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 81/152
2026-03-06T13:33:22.207 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 82/152
2026-03-06T13:33:22.246 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 83/152
2026-03-06T13:33:22.496 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 84/152
2026-03-06T13:33:22.526 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 85/152
2026-03-06T13:33:22.529 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 86/152
2026-03-06T13:33:22.533 INFO:teuthology.orchestra.run.vm03.stdout: Installing : perl-Benchmark-1.23-483.el9.noarch 87/152
2026-03-06T13:33:22.591 INFO:teuthology.orchestra.run.vm03.stdout: Installing : openblas-0.3.29-1.el9.x86_64 88/152
2026-03-06T13:33:22.594 INFO:teuthology.orchestra.run.vm03.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 89/152
2026-03-06T13:33:22.616 INFO:teuthology.orchestra.run.vm03.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 90/152
2026-03-06T13:33:22.989 INFO:teuthology.orchestra.run.vm03.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 91/152
2026-03-06T13:33:23.074 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 92/152
2026-03-06T13:33:23.867 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 93/152
2026-03-06T13:33:23.892 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 94/152
2026-03-06T13:33:24.053 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 95/152
2026-03-06T13:33:24.056 INFO:teuthology.orchestra.run.vm03.stdout: Upgrading : librbd1-2:19.2.3-47.gc24117fd552.el9.clyso.x86_6 96/152
2026-03-06T13:33:24.086 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librbd1-2:19.2.3-47.gc24117fd552.el9.clyso.x86_6 96/152
2026-03-06T13:33:24.091 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rbd-2:19.2.3-47.gc24117fd552.el9.clyso.x 97/152
2026-03-06T13:33:24.101 INFO:teuthology.orchestra.run.vm03.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 98/152
2026-03-06T13:33:24.360 INFO:teuthology.orchestra.run.vm03.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 99/152
2026-03-06T13:33:24.362 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librgw2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_6 100/152
2026-03-06T13:33:24.381 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librgw2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_6 100/152
2026-03-06T13:33:24.383 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rgw-2:19.2.3-47.gc24117fd552.el9.clyso.x 101/152
2026-03-06T13:33:25.485 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-common-2:19.2.3-47.gc24117fd552.el9.clyso.x 102/152
2026-03-06T13:33:25.490 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-common-2:19.2.3-47.gc24117fd552.el9.clyso.x 102/152
2026-03-06T13:33:25.515 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-common-2:19.2.3-47.gc24117fd552.el9.clyso.x 102/152
2026-03-06T13:33:25.518 INFO:teuthology.orchestra.run.vm03.stdout: Installing : smartmontools-1:7.2-10.el9.x86_64 103/152
2026-03-06T13:33:25.530 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: smartmontools-1:7.2-10.el9.x86_64 103/152
2026-03-06T13:33:25.530 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartd.service → /usr/lib/systemd/system/smartd.service.
2026-03-06T13:33:25.530 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:33:25.554 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-ply-3.11-14.el9.noarch 104/152
2026-03-06T13:33:25.574 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 105/152
2026-03-06T13:33:25.664 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 106/152
2026-03-06T13:33:25.678 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 107/152
2026-03-06T13:33:25.708 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 108/152
2026-03-06T13:33:25.749 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 109/152
2026-03-06T13:33:25.818 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 110/152
2026-03-06T13:33:25.859 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 111/152
2026-03-06T13:33:25.865 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 112/152
2026-03-06T13:33:25.871 INFO:teuthology.orchestra.run.vm03.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 113/152
2026-03-06T13:33:25.875 INFO:teuthology.orchestra.run.vm03.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 114/152
2026-03-06T13:33:25.877 INFO:teuthology.orchestra.run.vm03.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 115/152
2026-03-06T13:33:25.893 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 115/152
2026-03-06T13:33:26.207 INFO:teuthology.orchestra.run.vm03.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 116/152
2026-03-06T13:33:26.214 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-base-2:19.2.3-47.gc24117fd552.el9.clyso.x86 117/152
2026-03-06T13:33:26.279 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-base-2:19.2.3-47.gc24117fd552.el9.clyso.x86 117/152
2026-03-06T13:33:26.279 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-06T13:33:26.279 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-06T13:33:26.279 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:33:26.285 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-selinux-2:19.2.3-47.gc24117fd552.el9.clyso. 118/152
2026-03-06T13:33:32.461 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-selinux-2:19.2.3-47.gc24117fd552.el9.clyso. 118/152
2026-03-06T13:33:32.461 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /sys
2026-03-06T13:33:32.461 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /proc
2026-03-06T13:33:32.461 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /mnt
2026-03-06T13:33:32.461 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /var/tmp
2026-03-06T13:33:32.461 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /home
2026-03-06T13:33:32.461 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /root
2026-03-06T13:33:32.461 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /tmp
2026-03-06T13:33:32.461 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:33:32.580 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mds-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 119/152
2026-03-06T13:33:32.606 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mds-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 119/152
2026-03-06T13:33:32.606 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:33:32.606 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-06T13:33:32.606 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-06T13:33:32.606 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-06T13:33:32.606 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:33:32.832 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mon-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 120/152
2026-03-06T13:33:32.853 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mon-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 120/152
2026-03-06T13:33:32.853 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:33:32.853 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-06T13:33:32.853 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-06T13:33:32.853 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-06T13:33:32.853 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:33:32.861 INFO:teuthology.orchestra.run.vm03.stdout: Installing : mailcap-2.1.49-5.el9.noarch 121/152
2026-03-06T13:33:32.864 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 122/152
2026-03-06T13:33:32.882 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 123/152
2026-03-06T13:33:32.882 INFO:teuthology.orchestra.run.vm03.stdout:Creating group 'qat' with GID 994.
2026-03-06T13:33:32.882 INFO:teuthology.orchestra.run.vm03.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-06T13:33:32.882 INFO:teuthology.orchestra.run.vm03.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-06T13:33:32.882 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:33:32.892 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 123/152
2026-03-06T13:33:32.919 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 123/152
2026-03-06T13:33:32.919 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-06T13:33:32.919 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:33:32.940 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 124/152
2026-03-06T13:33:32.967 INFO:teuthology.orchestra.run.vm03.stdout: Installing : fuse-2.9.9-17.el9.x86_64 125/152
2026-03-06T13:33:33.040 INFO:teuthology.orchestra.run.vm03.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 126/152
2026-03-06T13:33:33.045 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-volume-2:19.2.3-47.gc24117fd552.el9.clyso.n 127/152
2026-03-06T13:33:33.059 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-volume-2:19.2.3-47.gc24117fd552.el9.clyso.n 127/152
2026-03-06T13:33:33.059 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:33:33.059 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-06T13:33:33.059 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:33:33.851 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-osd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 128/152
2026-03-06T13:33:33.875 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-osd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 128/152
2026-03-06T13:33:33.875 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:33:33.875 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-06T13:33:33.875 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-06T13:33:33.875 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-06T13:33:33.875 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:33:33.933 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: cephadm-2:19.2.3-47.gc24117fd552.el9.clyso.noarc 129/152
2026-03-06T13:33:33.937 INFO:teuthology.orchestra.run.vm03.stdout: Installing : cephadm-2:19.2.3-47.gc24117fd552.el9.clyso.noarc 129/152
2026-03-06T13:33:33.943 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-47.gc24117fd552. 130/152
2026-03-06T13:33:33.966 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-47.gc24117fd552 131/152
2026-03-06T13:33:33.970 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-47.gc24117fd552.el9.cl 132/152
2026-03-06T13:33:34.517 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-47.gc24117fd552.el9.cl 132/152
2026-03-06T13:33:34.524 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-47.gc24117fd552.el9. 133/152
2026-03-06T13:33:35.035 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-47.gc24117fd552.el9. 133/152
2026-03-06T13:33:35.038 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-47.gc2411 134/152
2026-03-06T13:33:35.050 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-47.gc2411 134/152
2026-03-06T13:33:35.051 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-k8sevents-2:19.2.3-47.gc24117fd552.el9. 135/152
2026-03-06T13:33:35.109 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-k8sevents-2:19.2.3-47.gc24117fd552.el9. 135/152
2026-03-06T13:33:35.165 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-47.gc24117fd552.e 136/152
2026-03-06T13:33:35.167 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 137/152
2026-03-06T13:33:35.189 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 137/152
2026-03-06T13:33:35.189 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:33:35.189 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-06T13:33:35.189 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-06T13:33:35.189 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-06T13:33:35.189 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:33:35.203 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-rook-2:19.2.3-47.gc24117fd552.el9.clyso 138/152
2026-03-06T13:33:35.214 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-47.gc24117fd552.el9.clyso 138/152
2026-03-06T13:33:35.269 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 139/152
2026-03-06T13:33:35.782 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-fuse-2:19.2.3-47.gc24117fd552.el9.clyso.x86 140/152
2026-03-06T13:33:35.785 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-radosgw-2:19.2.3-47.gc24117fd552.el9.clyso. 141/152
2026-03-06T13:33:35.805 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-47.gc24117fd552.el9.clyso. 141/152
2026-03-06T13:33:35.805 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:33:35.805 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-06T13:33:35.805 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-06T13:33:35.805 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-06T13:33:35.805 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:33:35.816 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-47.gc24117f 142/152
2026-03-06T13:33:35.835 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-47.gc24117f 142/152
2026-03-06T13:33:35.835 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:33:35.835 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-06T13:33:35.835 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:33:35.987 INFO:teuthology.orchestra.run.vm03.stdout: Installing : rbd-mirror-2:19.2.3-47.gc24117fd552.el9.clyso.x8 143/152
2026-03-06T13:33:36.007 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: rbd-mirror-2:19.2.3-47.gc24117fd552.el9.clyso.x8 143/152
2026-03-06T13:33:36.007 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:33:36.007 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-06T13:33:36.007 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-06T13:33:36.007 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-06T13:33:36.007 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:33:38.556 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-test-2:19.2.3-47.gc24117fd552.el9.clyso.x86 144/152
2026-03-06T13:33:38.566 INFO:teuthology.orchestra.run.vm03.stdout: Installing : rbd-fuse-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 145/152
2026-03-06T13:33:38.599 INFO:teuthology.orchestra.run.vm03.stdout: Installing : rbd-nbd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_6 146/152
2026-03-06T13:33:38.607 INFO:teuthology.orchestra.run.vm03.stdout: Installing : perl-Test-Harness-1:3.42-461.el9.noarch 147/152
2026-03-06T13:33:38.624 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libcephfs-devel-2:19.2.3-47.gc24117fd552.el9.cly 148/152
2026-03-06T13:33:38.632 INFO:teuthology.orchestra.run.vm03.stdout: Installing : s3cmd-2.4.0-1.el9.noarch 149/152
2026-03-06T13:33:38.636 INFO:teuthology.orchestra.run.vm03.stdout: Installing : bzip2-1.0.8-11.el9.x86_64 150/152
2026-03-06T13:33:38.636 INFO:teuthology.orchestra.run.vm03.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 151/152
2026-03-06T13:33:38.650 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 151/152
2026-03-06T13:33:38.650 INFO:teuthology.orchestra.run.vm03.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 152/152
2026-03-06T13:33:40.048 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 152/152
2026-03-06T13:33:40.048 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 1/152
2026-03-06T13:33:40.048 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-base-2:19.2.3-47.gc24117fd552.el9.clyso.x86 2/152
2026-03-06T13:33:40.048 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-common-2:19.2.3-47.gc24117fd552.el9.clyso.x 3/152
2026-03-06T13:33:40.048 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-fuse-2:19.2.3-47.gc24117fd552.el9.clyso.x86 4/152
2026-03-06T13:33:40.048 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-47.gc24117f 5/152
2026-03-06T13:33:40.048 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mds-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 6/152
2026-03-06T13:33:40.048 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 7/152
2026-03-06T13:33:40.048 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mon-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 8/152
2026-03-06T13:33:40.048 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-osd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 9/152
2026-03-06T13:33:40.048 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-radosgw-2:19.2.3-47.gc24117fd552.el9.clyso. 10/152
2026-03-06T13:33:40.048 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-selinux-2:19.2.3-47.gc24117fd552.el9.clyso. 11/152
2026-03-06T13:33:40.048 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-test-2:19.2.3-47.gc24117fd552.el9.clyso.x86 12/152
2026-03-06T13:33:40.048 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs-devel-2:19.2.3-47.gc24117fd552.el9.cly 13/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs2-2:19.2.3-47.gc24117fd552.el9.clyso.x8 14/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephsqlite-2:19.2.3-47.gc24117fd552.el9.clyso 15/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados-devel-2:19.2.3-47.gc24117fd552.el9.clys 16/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libradosstriper1-2:19.2.3-47.gc24117fd552.el9.cl 17/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librgw2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_6 18/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ceph-argparse-2:19.2.3-47.gc24117fd552.e 19/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ceph-common-2:19.2.3-47.gc24117fd552.el9 20/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cephfs-2:19.2.3-47.gc24117fd552.el9.clys 21/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rados-2:19.2.3-47.gc24117fd552.el9.clyso 22/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rbd-2:19.2.3-47.gc24117fd552.el9.clyso.x 23/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rgw-2:19.2.3-47.gc24117fd552.el9.clyso.x 24/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-fuse-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 25/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-mirror-2:19.2.3-47.gc24117fd552.el9.clyso.x8 26/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-nbd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_6 27/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-47.gc24117fd552 28/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-47.gc24117fd552.el9.cl 29/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-47.gc24117fd552.el9. 30/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-47.gc2411 31/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-k8sevents-2:19.2.3-47.gc24117fd552.el9. 32/152
2026-03-06T13:33:40.049 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-47.gc24117fd552.e 33/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-rook-2:19.2.3-47.gc24117fd552.el9.clyso 34/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-47.gc24117fd552. 35/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-volume-2:19.2.3-47.gc24117fd552.el9.clyso.n 36/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cephadm-2:19.2.3-47.gc24117fd552.el9.clyso.noarc 37/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : bzip2-1.0.8-11.el9.x86_64 38/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 39/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 40/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 41/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 42/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 43/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 44/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 45/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 46/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 47/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ply-3.11-14.el9.noarch 49/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 50/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 51/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 52/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : smartmontools-1:7.2-10.el9.x86_64 53/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : unzip-6.0-59.el9.x86_64 54/152
2026-03-06T13:33:40.050 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : zip-3.0-35.el9.x86_64 55/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 56/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 57/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 58/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 59/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 60/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 61/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 62/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 63/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 64/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 65/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 66/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-5.4.4-4.el9.x86_64 67/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 68/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 69/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : perl-Benchmark-1.23-483.el9.noarch 70/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : perl-Test-Harness-1:3.42-461.el9.noarch 71/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 72/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 73/152
2026-03-06T13:33:40.051 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 74/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 75/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 76/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-lxml-4.6.5-3.el9.x86_64 77/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 78/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 79/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 80/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 81/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 82/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 83/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 84/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 85/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 86/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 87/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 88/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 89/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 90/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 91/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 92/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlsec1-1.2.29-13.el9.x86_64 93/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlsec1-openssl-1.2.29-13.el9.x86_64 94/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 95/152
2026-03-06T13:33:40.052 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 96/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 97/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 98/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 99/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 100/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 101/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 102/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 103/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 104/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 105/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 106/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 107/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 108/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 109/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 110/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 111/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 112/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 113/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 114/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 115/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 116/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 117/152
2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-influxdb-5.3.1-1.el9.noarch 118/152 2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-isodate-0.6.1-3.el9.noarch 119/152 2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 120/152 2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 121/152 2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 122/152 2026-03-06T13:33:40.053 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 123/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 124/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 125/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 126/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 127/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 128/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-msgpack-1.0.3-2.el9.x86_64 129/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 130/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 131/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 132/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 133/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 134/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 135/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 136/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-saml-1.16.0-1.el9.noarch 137/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 138/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 139/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 140/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 141/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 142/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-xmlsec-1.3.13-1.el9.x86_64 143/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 144/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 145/152 2026-03-06T13:33:40.054 
INFO:teuthology.orchestra.run.vm03.stdout: Verifying : re2-1:20211101-20.el9.x86_64 146/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : s3cmd-2.4.0-1.el9.noarch 147/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 148/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados2-2:19.2.3-47.gc24117fd552.el9.clyso.x86 149/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 150/152 2026-03-06T13:33:40.054 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librbd1-2:19.2.3-47.gc24117fd552.el9.clyso.x86_6 151/152 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 152/152 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout:Upgraded: 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: librados2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: librbd1-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout:Installed: 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: bzip2-1.0.8-11.el9.x86_64 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: ceph-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards-2:19.2.3-47.gc24117fd552.el9.clyso.noarch 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm-2:19.2.3-47.gc24117fd552.el9.clyso.noarch 2026-03-06T13:33:40.159 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard-2:19.2.3-47.gc24117fd552.el9.clyso.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local-2:19.2.3-47.gc24117fd552.el9.clyso.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents-2:19.2.3-47.gc24117fd552.el9.clyso.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core-2:19.2.3-47.gc24117fd552.el9.clyso.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook-2:19.2.3-47.gc24117fd552.el9.clyso.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: 
ceph-osd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts-2:19.2.3-47.gc24117fd552.el9.clyso.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: ceph-radosgw-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: ceph-selinux-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume-2:19.2.3-47.gc24117fd552.el9.clyso.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: cephadm-2:19.2.3-47.gc24117fd552.el9.clyso.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: fuse-2.9.9-17.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: libcephsqlite-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: librgw2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-06T13:33:40.160 
INFO:teuthology.orchestra.run.vm03.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: lua-5.4.4-4.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: perl-Benchmark-1.23-483.el9.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: perl-Test-Harness-1:3.42-461.el9.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.160 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-influxdb-5.3.1-1.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-isodate-0.6.1-3.el9.noarch 2026-03-06T13:33:40.161 
INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-lxml-4.6.5-3.el9.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-msgpack-1.0.3-2.el9.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply-3.11-14.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-rados-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-rgw-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.161 
INFO:teuthology.orchestra.run.vm03.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-saml-1.16.0-1.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmlsec-1.3.13-1.el9.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-06T13:33:40.161 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout: rbd-fuse-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout: rbd-nbd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout: re2-1:20211101-20.el9.x86_64 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout: s3cmd-2.4.0-1.el9.noarch 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools-1:7.2-10.el9.x86_64 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout: unzip-6.0-59.el9.x86_64 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1-1.2.29-13.el9.x86_64 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1-openssl-1.2.29-13.el9.x86_64 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout: zip-3.0-35.el9.x86_64 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-06T13:33:40.162 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-06T13:33:40.243 DEBUG:teuthology.parallel:result is None 2026-03-06T13:33:40.243 INFO:teuthology.task.install:Skipping version verification because we have custom repos... 2026-03-06T13:33:40.243 INFO:teuthology.task.install.util:Shipping valgrind.supp... 
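The install task skipped its usual package-version verification because custom repos are configured. A manual spot-check would be straightforward; a minimal sketch (standard rpm/ceph CLI, with the expected build string taken from this job's sha1):

    # confirm the expected 19.2.3-47.gc24117fd552.el9.clyso build actually landed
    rpm -q ceph-common
    ceph --version    # expect "ceph version 19.2.3-47-gc24117fd552 (c24117fd55...)"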
2026-03-06T13:33:40.244 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-06T13:33:40.244 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-06T13:33:40.272 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 2026-03-06T13:33:40.272 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-06T13:33:40.272 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/daemon-helper 2026-03-06T13:33:40.340 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-06T13:33:40.403 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-06T13:33:40.403 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-06T13:33:40.403 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-06T13:33:40.467 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-06T13:33:40.531 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-06T13:33:40.531 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-06T13:33:40.531 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/stdin-killer 2026-03-06T13:33:40.594 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-06T13:33:40.658 INFO:teuthology.run_tasks:Running task cephadm... 2026-03-06T13:33:40.704 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon election default strategy': 1}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': False}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'CEPHADM_FAILED_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'c24117fd5525679b799527bc1bd1f1dd0a2db5e2', 'cephadm_binary_url': 'https://download.ceph.com/rpm-19.2.3/el9/noarch/cephadm', 'containers': {'image': 'harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3'}} 2026-03-06T13:33:40.704 INFO:tasks.cephadm:Provided image contains tag or digest, using it as is 2026-03-06T13:33:40.704 INFO:tasks.cephadm:Cluster image is harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 2026-03-06T13:33:40.704 INFO:tasks.cephadm:Cluster fsid is b4d7b36a-1958-11f1-a2a1-8fd8798eb057 2026-03-06T13:33:40.704 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-06T13:33:40.704 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.103'} 2026-03-06T13:33:40.704 INFO:tasks.cephadm:First mon is mon.a on vm03 2026-03-06T13:33:40.704 INFO:tasks.cephadm:First mgr is a 2026-03-06T13:33:40.704 INFO:tasks.cephadm:Normalizing hostnames... 2026-03-06T13:33:40.704 DEBUG:teuthology.orchestra.run.vm03:> sudo hostname $(hostname -s) 2026-03-06T13:33:40.728 INFO:tasks.cephadm:Downloading cephadm from url: https://download.ceph.com/rpm-19.2.3/el9/noarch/cephadm 2026-03-06T13:33:40.728 DEBUG:teuthology.orchestra.run.vm03:> curl --silent -L https://download.ceph.com/rpm-19.2.3/el9/noarch/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-06T13:33:41.856 INFO:teuthology.orchestra.run.vm03.stdout:-rw-r--r--. 
1 ubuntu ubuntu 787672 Mar 6 13:33 /home/ubuntu/cephtest/cephadm 2026-03-06T13:33:41.856 DEBUG:teuthology.orchestra.run.vm03:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-06T13:33:41.874 INFO:tasks.cephadm:Pulling image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 on all hosts... 2026-03-06T13:33:41.874 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 pull 2026-03-06T13:33:42.214 INFO:teuthology.orchestra.run.vm03.stderr:Pulling container image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3... 2026-03-06T13:33:58.519 INFO:teuthology.orchestra.run.vm03.stdout:{ 2026-03-06T13:33:58.519 INFO:teuthology.orchestra.run.vm03.stdout: "ceph_version": "ceph version 19.2.3-47-gc24117fd552 (c24117fd5525679b799527bc1bd1f1dd0a2db5e2) squid (stable)", 2026-03-06T13:33:58.519 INFO:teuthology.orchestra.run.vm03.stdout: "image_id": "306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68", 2026-03-06T13:33:58.519 INFO:teuthology.orchestra.run.vm03.stdout: "repo_digests": [ 2026-03-06T13:33:58.519 INFO:teuthology.orchestra.run.vm03.stdout: "harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b" 2026-03-06T13:33:58.519 INFO:teuthology.orchestra.run.vm03.stdout: ] 2026-03-06T13:33:58.519 INFO:teuthology.orchestra.run.vm03.stdout:} 2026-03-06T13:33:58.538 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /etc/ceph 2026-03-06T13:33:58.562 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 777 /etc/ceph 2026-03-06T13:33:58.626 INFO:tasks.cephadm:Writing seed config... 
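For reference, the cephadm download and image pull the task just performed condense to a few standalone commands (URL, size check, and image name exactly as logged above):

    curl --silent -L https://download.ceph.com/rpm-19.2.3/el9/noarch/cephadm > cephadm
    # guard against an empty or error-page download before marking it executable
    test -s cephadm && test "$(stat -c%s cephadm)" -gt 1000 && chmod +x cephadm
    sudo ./cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 pull
    # `pull` prints the image's ceph_version, image_id and repo_digests as JSON, as seen above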
2026-03-06T13:33:58.626 INFO:tasks.cephadm: override: [global] mon election default strategy = 1 2026-03-06T13:33:58.626 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-06T13:33:58.626 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-06T13:33:58.626 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = False 2026-03-06T13:33:58.626 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-06T13:33:58.626 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-06T13:33:58.626 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-06T13:33:58.626 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-06T13:33:58.626 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-06T13:33:58.626 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-06T13:33:58.627 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-06T13:33:58.627 DEBUG:teuthology.orchestra.run.vm03:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 2026-03-06T13:33:58.682 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = b4d7b36a-1958-11f1-a2a1-8fd8798eb057
mon election default strategy = 1
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
mgr/cephadm/use_agent = False
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-06T13:33:58.682 DEBUG:teuthology.orchestra.run.vm03:mon.a> sudo journalctl -f -n 0 -u ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mon.a.service 2026-03-06T13:33:58.723 DEBUG:teuthology.orchestra.run.vm03:mgr.a> sudo journalctl -f -n 0 -u ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a.service 2026-03-06T13:33:58.766 INFO:tasks.cephadm:Bootstrapping... 2026-03-06T13:33:58.766 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 -v bootstrap --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.103 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-06T13:33:59.060 INFO:teuthology.orchestra.run.vm03.stdout:-------------------------------------------------------------------------------- 2026-03-06T13:33:59.060 INFO:teuthology.orchestra.run.vm03.stdout:cephadm ['--image', 'harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3', '-v', 'bootstrap', '--fsid', 'b4d7b36a-1958-11f1-a2a1-8fd8798eb057', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'a', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.103', '--skip-admin-label'] 2026-03-06T13:33:59.061 INFO:teuthology.orchestra.run.vm03.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-06T13:33:59.061 INFO:teuthology.orchestra.run.vm03.stdout:Verifying podman|docker is present... 2026-03-06T13:33:59.079 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stdout 5.8.0 2026-03-06T13:33:59.080 INFO:teuthology.orchestra.run.vm03.stdout:Verifying lvm2 is present... 2026-03-06T13:33:59.080 INFO:teuthology.orchestra.run.vm03.stdout:Verifying time synchronization is in place... 2026-03-06T13:33:59.087 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-06T13:33:59.087 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-06T13:33:59.092 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-06T13:33:59.092 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive 2026-03-06T13:33:59.098 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout enabled 2026-03-06T13:33:59.102 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout active 2026-03-06T13:33:59.102 INFO:teuthology.orchestra.run.vm03.stdout:Unit chronyd.service is enabled and running 2026-03-06T13:33:59.102 INFO:teuthology.orchestra.run.vm03.stdout:Repeating the final host check...
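Stripped of the teuthology-specific paths, the bootstrap invocation above has this shape (a sketch; the fsid, mon IP, and image are this run's values):

    # --orphan-initial-daemons: don't generate managed service specs for the initial mon/mgr
    # --skip-monitoring-stack:  omit prometheus/grafana/alertmanager/node-exporter
    sudo ./cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 \
        bootstrap \
        --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 \
        --mon-ip 192.168.123.103 \
        --mon-id a --mgr-id a \
        --config seed.ceph.conf \
        --orphan-initial-daemons --skip-monitoring-stack --skip-admin-label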
2026-03-06T13:33:59.122 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stdout 5.8.0 2026-03-06T13:33:59.122 INFO:teuthology.orchestra.run.vm03.stdout:podman (/bin/podman) version 5.8.0 is present 2026-03-06T13:33:59.122 INFO:teuthology.orchestra.run.vm03.stdout:systemctl is present 2026-03-06T13:33:59.122 INFO:teuthology.orchestra.run.vm03.stdout:lvcreate is present 2026-03-06T13:33:59.127 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-06T13:33:59.127 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-06T13:33:59.133 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-06T13:33:59.133 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive 2026-03-06T13:33:59.139 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout enabled 2026-03-06T13:33:59.144 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout active 2026-03-06T13:33:59.144 INFO:teuthology.orchestra.run.vm03.stdout:Unit chronyd.service is enabled and running 2026-03-06T13:33:59.144 INFO:teuthology.orchestra.run.vm03.stdout:Host looks OK 2026-03-06T13:33:59.144 INFO:teuthology.orchestra.run.vm03.stdout:Cluster fsid: b4d7b36a-1958-11f1-a2a1-8fd8798eb057 2026-03-06T13:33:59.144 INFO:teuthology.orchestra.run.vm03.stdout:Acquiring lock 140092965185040 on /run/cephadm/b4d7b36a-1958-11f1-a2a1-8fd8798eb057.lock 2026-03-06T13:33:59.144 INFO:teuthology.orchestra.run.vm03.stdout:Lock 140092965185040 acquired on /run/cephadm/b4d7b36a-1958-11f1-a2a1-8fd8798eb057.lock 2026-03-06T13:33:59.145 INFO:teuthology.orchestra.run.vm03.stdout:Verifying IP 192.168.123.103 port 3300 ... 2026-03-06T13:33:59.145 INFO:teuthology.orchestra.run.vm03.stdout:Verifying IP 192.168.123.103 port 6789 ... 
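The time-synchronization probe, run once per host check above, is just systemctl queries tried against known unit names until one is both enabled and active; reduced to shell it is roughly (a sketch — cephadm implements this in Python and knows more unit names than the two exercised here):

    for unit in chrony.service chronyd.service; do
        if systemctl is-enabled "$unit" >/dev/null 2>&1 \
           && [ "$(systemctl is-active "$unit")" = active ]; then
            echo "time sync OK via $unit"
            break
        fi
    done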
2026-03-06T13:33:59.145 INFO:teuthology.orchestra.run.vm03.stdout:Base mon IP(s) is [192.168.123.103:3300, 192.168.123.103:6789], mon addrv is [v2:192.168.123.103:3300,v1:192.168.123.103:6789] 2026-03-06T13:33:59.148 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.103 metric 100 2026-03-06T13:33:59.148 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.103 metric 100 2026-03-06T13:33:59.150 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-06T13:33:59.150 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium 2026-03-06T13:33:59.152 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-06T13:33:59.152 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-06T13:33:59.152 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-06T13:33:59.152 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000 2026-03-06T13:33:59.153 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:3/64 scope link noprefixroute 2026-03-06T13:33:59.153 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-06T13:33:59.153 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.0/24` 2026-03-06T13:33:59.153 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.0/24` 2026-03-06T13:33:59.153 INFO:teuthology.orchestra.run.vm03.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24'] 2026-03-06T13:33:59.154 INFO:teuthology.orchestra.run.vm03.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-06T13:33:59.154 INFO:teuthology.orchestra.run.vm03.stdout:Pulling container image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3... 2026-03-06T13:33:59.837 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stdout 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 2026-03-06T13:33:59.837 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Trying to pull harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3... 
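The public-network inference above comes straight from the kernel's routing view; the same check can be reproduced by hand with stock iproute2 commands:

    ip route ls                    # 192.168.123.0/24 dev eth0 ... src 192.168.123.103
    ip route get 192.168.123.103   # shows the device and source address owning the mon IP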
2026-03-06T13:33:59.837 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Getting image source signatures 2026-03-06T13:33:59.837 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Copying blob sha256:d21d4233fd3d4dd2f376e5ef084c47891c860682c1de15a9c0357cea5defbc91 2026-03-06T13:33:59.837 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Copying config sha256:306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 2026-03-06T13:33:59.837 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Writing manifest to image destination 2026-03-06T13:34:00.608 INFO:teuthology.orchestra.run.vm03.stdout:ceph: stdout ceph version 19.2.3-47-gc24117fd552 (c24117fd5525679b799527bc1bd1f1dd0a2db5e2) squid (stable) 2026-03-06T13:34:00.608 INFO:teuthology.orchestra.run.vm03.stdout:Ceph version: ceph version 19.2.3-47-gc24117fd552 (c24117fd5525679b799527bc1bd1f1dd0a2db5e2) squid (stable) 2026-03-06T13:34:00.608 INFO:teuthology.orchestra.run.vm03.stdout:Extracting ceph user uid/gid from container image... 2026-03-06T13:34:00.836 INFO:teuthology.orchestra.run.vm03.stdout:stat: stdout 167 167 2026-03-06T13:34:00.836 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial keys... 2026-03-06T13:34:01.045 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQC4yappj4nxNhAA+dBKfZW7wCoRnq9A+EVyWw== 2026-03-06T13:34:01.261 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQC5yappsckzCBAA7Hb+bgpe5achkmqfHNgHCw== 2026-03-06T13:34:01.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQC5yappnzE2FRAAAWZC8fJs1ak7ivz907N9pg== 2026-03-06T13:34:01.502 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial monmap... 2026-03-06T13:34:01.719 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-06T13:34:01.719 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-06T13:34:01.719 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to b4d7b36a-1958-11f1-a2a1-8fd8798eb057 2026-03-06T13:34:01.719 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-06T13:34:01.719 INFO:teuthology.orchestra.run.vm03.stdout:monmaptool for a [v2:192.168.123.103:3300,v1:192.168.123.103:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-06T13:34:01.719 INFO:teuthology.orchestra.run.vm03.stdout:setting min_mon_release = quincy 2026-03-06T13:34:01.720 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: set fsid to b4d7b36a-1958-11f1-a2a1-8fd8798eb057 2026-03-06T13:34:01.720 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-06T13:34:01.720 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-06T13:34:01.720 INFO:teuthology.orchestra.run.vm03.stdout:Creating mon... 2026-03-06T13:34:01.988 INFO:teuthology.orchestra.run.vm03.stdout:create mon.a on 2026-03-06T13:34:02.155 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target". 2026-03-06T13:34:02.506 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 
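The "Creating initial keys" and "Creating initial monmap" steps above amount to roughly the following (a sketch; keys are freshly generated each run, and the fsid/addrv are this run's values — see ceph-authtool(8) and monmaptool(8) for the exact flags cephadm passes):

    ceph-authtool --gen-print-key   # emits a fresh base64 secret, like the three logged above
    monmaptool --create --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 \
        --addv a '[v2:192.168.123.103:3300,v1:192.168.123.103:6789]' /tmp/monmap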
2026-03-06T13:34:02.636 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057.target → /etc/systemd/system/ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057.target. 2026-03-06T13:34:02.636 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057.target → /etc/systemd/system/ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057.target. 2026-03-06T13:34:02.789 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mon.a 2026-03-06T13:34:02.789 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to reset failed state of unit ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mon.a.service: Unit ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mon.a.service not loaded. 2026-03-06T13:34:02.930 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057.target.wants/ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mon.a.service → /etc/systemd/system/ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@.service. 2026-03-06T13:34:03.097 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-06T13:34:03.097 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to enable service . firewalld.service is not available 2026-03-06T13:34:03.097 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mon to start... 2026-03-06T13:34:03.097 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mon... 2026-03-06T13:34:03.526 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout cluster: 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout id: b4d7b36a-1958-11f1-a2a1-8fd8798eb057 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout services: 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.255455s) 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout data: 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout pgs: 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:mon is available 2026-03-06T13:34:03.527 INFO:teuthology.orchestra.run.vm03.stdout:Assimilating anything we can from ceph.conf... 
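"Waiting for mon..." above is a readiness poll: cephadm repeats a status call until the new monitor answers, at which point the `ceph -s` output that follows is captured. A standalone equivalent (a sketch, not the literal implementation):

    until ceph -s --connect-timeout 5 >/dev/null 2>&1; do
        sleep 1    # keep retrying until mon.a responds
    done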
2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [global] 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout fsid = b4d7b36a-1958-11f1-a2a1-8fd8798eb057 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.103:3300,v1:192.168.123.103:6789] 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr/cephadm/use_agent = False 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [osd] 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-06T13:34:03.929 INFO:teuthology.orchestra.run.vm03.stdout:Generating new minimal ceph.conf... 2026-03-06T13:34:04.317 INFO:teuthology.orchestra.run.vm03.stdout:Restarting the monitor... 
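Both config moves above are standard `ceph config` subcommands; done by hand against the same paths:

    # fold every recognized option from the seed ceph.conf into the mon config database
    ceph config assimilate-conf -i /etc/ceph/ceph.conf
    # emit the minimal client config (essentially fsid + mon_host) that replaces it
    ceph config generate-minimal-conf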
2026-03-06T13:34:04.508 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[49999]: 2026-03-06T12:34:04.387+0000 7fa0cafac640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-06T13:34:04.791 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 podman[50299]: 2026-03-06 13:34:04.533750475 +0100 CET m=+0.159150266 container died 2ae7b94364d037154aeb017329ed0c79e18633754e1bc3ec95b23da8e4216cb0 (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git) 2026-03-06T13:34:04.791 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 podman[50299]: 2026-03-06 13:34:04.649492583 +0100 CET m=+0.274892374 container remove 2ae7b94364d037154aeb017329ed0c79e18633754e1bc3ec95b23da8e4216cb0 (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default) 2026-03-06T13:34:04.791 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 bash[50299]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a 2026-03-06T13:34:04.791 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mon.a.service: Deactivated successfully. 2026-03-06T13:34:04.791 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 systemd[1]: Stopped Ceph mon.a for b4d7b36a-1958-11f1-a2a1-8fd8798eb057. 2026-03-06T13:34:04.791 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 systemd[1]: Starting Ceph mon.a for b4d7b36a-1958-11f1-a2a1-8fd8798eb057... 
2026-03-06T13:34:04.832 INFO:teuthology.orchestra.run.vm03.stdout:Setting public_network to 192.168.123.0/24 in mon config section 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 podman[50377]: 2026-03-06 13:34:04.790402828 +0100 CET m=+0.014065919 container create 26481bcb51760faa6ca25a888a26d73dadb44a1f68997d41ab5521c2764f908a (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2) 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 podman[50377]: 2026-03-06 13:34:04.821679179 +0100 CET m=+0.045342270 container init 26481bcb51760faa6ca25a888a26d73dadb44a1f68997d41ab5521c2764f908a (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552) 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 podman[50377]: 2026-03-06 13:34:04.825155736 +0100 CET m=+0.048818827 container start 26481bcb51760faa6ca25a888a26d73dadb44a1f68997d41ab5521c2764f908a (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2) 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 bash[50377]: 26481bcb51760faa6ca25a888a26d73dadb44a1f68997d41ab5521c2764f908a 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 podman[50377]: 2026-03-06 13:34:04.784758313 +0100 CET m=+0.008421404 image pull 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 systemd[1]: Started Ceph mon.a for b4d7b36a-1958-11f1-a2a1-8fd8798eb057. 
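Setting the public network, as logged at the top of this block, is a single mon-scoped config write; the equivalent CLI call uses the subnet inferred earlier:

    ceph config set mon public_network 192.168.123.0/24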
2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: set uid:gid to 167:167 (ceph:ceph) 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: ceph version 19.2.3-47-gc24117fd552 (c24117fd5525679b799527bc1bd1f1dd0a2db5e2) squid (stable), process ceph-mon, pid 7 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: pidfile_write: ignore empty --pid-file 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: load: jerasure load: lrc 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: RocksDB version: 7.9.2 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Git sha 0 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Compile date 2026-03-03 21:08:28 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: DB SUMMARY 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: DB Session ID: I6V9GRVGDGLC18UHRBN2 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: CURRENT file: CURRENT 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: IDENTITY file: IDENTITY 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 87081 ; 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.error_if_exists: 0 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.create_if_missing: 0 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.paranoid_checks: 1 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.env: 0x557492d28ca0 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.fs: PosixFileSystem 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.info_log: 0x557493d77700 2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_file_opening_threads: 16 
2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.statistics: (nil)
2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.use_fsync: 0
2026-03-06T13:34:05.101 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_log_file_size: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.log_file_time_to_roll: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.keep_log_file_num: 1000
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.recycle_log_file_num: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.allow_fallocate: 1
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.allow_mmap_reads: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.allow_mmap_writes: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.use_direct_reads: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.create_missing_column_families: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.db_log_dir:
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.wal_dir:
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.table_cache_numshardbits: 6
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.WAL_ttl_seconds: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.WAL_size_limit_MB: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.is_fd_close_on_exec: 1
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.advise_random_on_open: 1
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.db_write_buffer_size: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.write_buffer_manager: 0x557493d7b900
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.use_adaptive_mutex: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.rate_limiter: (nil)
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.wal_recovery_mode: 2
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.enable_thread_tracking: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.enable_pipelined_write: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.unordered_write: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.row_cache: None
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.wal_filter: None
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.allow_ingest_behind: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.two_write_queues: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.manual_wal_flush: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.wal_compression: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.atomic_flush: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.persist_stats_to_disk: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.write_dbid_to_manifest: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.log_readahead_size: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.best_efforts_recovery: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.allow_data_in_errors: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.db_host_id: __hostname__
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.enforce_single_del_contracts: true
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_background_jobs: 2
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_background_compactions: -1
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_subcompactions: 1
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.delayed_write_rate : 16777216
2026-03-06T13:34:05.102 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_total_wal_size: 0
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.stats_dump_period_sec: 600
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.stats_persist_period_sec: 600
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_open_files: -1
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.bytes_per_sync: 0
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.wal_bytes_per_sync: 0
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.strict_bytes_per_sync: 0
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compaction_readahead_size: 0
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_background_flushes: -1
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Compression algorithms supported:
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: kZSTD supported: 0
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: kXpressCompression supported: 0
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: kBZip2Compression supported: 0
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: kLZ4Compression supported: 1
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: kZlibCompression supported: 1
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: kLZ4HCCompression supported: 1
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: kSnappyCompression supported: 1
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Fast CRC32 supported: Supported on x86
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: DMutex implementation: pthread_mutex_t
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.merge_operator:
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compaction_filter: None
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compaction_filter_factory: None
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.sst_partitioner_factory: None
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.memtable_factory: SkipListFactory
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.table_factory: BlockBasedTable
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557493d77320)
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: cache_index_and_filter_blocks: 1
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: pin_top_level_index_and_filter: 1
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: index_type: 0
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: data_block_index_type: 0
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: index_shortening: 1
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: data_block_hash_table_util_ratio: 0.750000
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: checksum: 4
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: no_block_cache: 0
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: block_cache: 0x557493d9b350
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: block_cache_name: BinnedLRUCache
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: block_cache_options:
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: capacity : 536870912
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: num_shard_bits : 4
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: strict_capacity_limit : 0
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: high_pri_pool_ratio: 0.000
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: block_cache_compressed: (nil)
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: persistent_cache: (nil)
2026-03-06T13:34:05.103 INFO:journalctl@ceph.mon.a.vm03.stdout: block_size: 4096
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: block_size_deviation: 10
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: block_restart_interval: 16
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: index_block_restart_interval: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: metadata_block_size: 4096
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: partition_filters: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: use_delta_encoding: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: filter_policy: bloomfilter
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: whole_key_filtering: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: verify_compression: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: read_amp_bytes_per_bit: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: format_version: 5
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: enable_index_compression: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: block_align: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: max_auto_readahead_size: 262144
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: prepopulate_block_cache: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: initial_auto_readahead_size: 8192
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout: num_file_reads_for_auto_readahead: 2
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.write_buffer_size: 33554432
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_write_buffer_number: 2
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compression: NoCompression
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.bottommost_compression: Disabled
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.prefix_extractor: nullptr
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.num_levels: 7
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compression_opts.window_bits: -14
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compression_opts.level: 32767
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compression_opts.strategy: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compression_opts.enabled: false
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.target_file_size_base: 67108864
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.target_file_size_multiplier: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.arena_block_size: 1048576
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.disable_auto_compactions: 0
2026-03-06T13:34:05.104 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.inplace_update_support: 0
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.inplace_update_num_locks: 10000
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.memtable_huge_page_size: 0
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.bloom_locality: 0
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.max_successive_merges: 0
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.optimize_filters_for_hits: 0
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.paranoid_file_checks: 0
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.force_consistency_checks: 1
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.report_bg_io_stats: 0
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.ttl: 2592000
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.periodic_compaction_seconds: 0
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.enable_blob_files: false
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.min_blob_size: 0
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.blob_file_size: 268435456
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.blob_compression_type: NoCompression
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.enable_blob_garbage_collection: false
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.blob_file_starting_level: 0
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3a4bcfb2-ed2e-4005-9cd4-9c95bc884bc7
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772800444853770, "job": 1, "event": "recovery_started", "wal_files": [9]}
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772800444856018, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 84042, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 245, "table_properties": {"data_size": 82208, "index_size": 223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 581, "raw_key_size": 10134, "raw_average_key_size": 47, "raw_value_size": 76403, "raw_average_value_size": 360, "num_data_blocks": 10, "num_entries": 212, "num_filter_entries": 212, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772800444, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3a4bcfb2-ed2e-4005-9cd4-9c95bc884bc7", "db_session_id": "I6V9GRVGDGLC18UHRBN2", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772800444856187, "job": 1, "event": "recovery_finished"}
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557493d9ce00
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: rocksdb: DB pointer 0x557493db0000
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: starting mon.a rank 0 at public addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] at bind addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: mon.a@-1(???) e1 preinit fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: mon.a@-1(???).mds e1 new map
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: mon.a@-1(???).mds e1 print_map
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout: e1
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout: btime 2026-03-06T12:34:03:119101+0000
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout: enable_multiple, ever_enabled_multiple: 1,1
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout: legacy client fscid: -1
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout: No filesystems configured
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: mon.a@-1(???).mgr e0 loading version 1
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: mon.a@-1(???).mgr e1 active server: (0)
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: mon.a@-1(???).mgr e1 mkfs or daemon transitioned to available, loading commands
2026-03-06T13:34:05.105 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: mon.a is new leader, mons a in quorum (ranks 0)
2026-03-06T13:34:06.106 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: monmap epoch 1
2026-03-06T13:34:05.106 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057
2026-03-06T13:34:05.106 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: last_changed 2026-03-06T12:34:01.596754+0000
2026-03-06T13:34:05.106 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: created 2026-03-06T12:34:01.596754+0000
2026-03-06T13:34:05.106 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: min_mon_release 19 (squid)
2026-03-06T13:34:05.106 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: election_strategy: 1
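[Editor's note: the monitor's RocksDB option dump above is easiest to compare across runs after extracting the key/value pairs. A minimal sketch, assuming a local copy of this job log saved as teuthology.log; the filename and this helper are illustrative and not part of the run.]

import re
from pathlib import Path

# Each reflowed record looks like:
#   "... ceph-mon[50411]: rocksdb: Options.write_buffer_size: 33554432"
OPTION_RE = re.compile(r"rocksdb:\s+(Options\.\S+?)\s*:\s*(\S.*)?$")

def rocksdb_options(log_path: Path) -> dict:
    """Collect {option_name: value} from 'rocksdb: Options.*' records."""
    opts = {}
    for line in log_path.read_text().splitlines():
        m = OPTION_RE.search(line)
        if m:
            opts[m.group(1)] = (m.group(2) or "").strip()
    return opts

# Usage: in this run these print '33554432' and 'NoCompression'.
opts = rocksdb_options(Path("teuthology.log"))
print(opts.get("Options.write_buffer_size"))
print(opts.get("Options.compression"))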
2026-03-06T13:34:05.106 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a
2026-03-06T13:34:05.106 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: fsmap
2026-03-06T13:34:05.106 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: osdmap e1: 0 total, 0 up, 0 in
2026-03-06T13:34:05.106 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:04 vm03 ceph-mon[50411]: mgrmap e1: no daemons active
2026-03-06T13:34:05.235 INFO:teuthology.orchestra.run.vm03.stdout:Wrote config to /etc/ceph/ceph.conf
2026-03-06T13:34:05.236 INFO:teuthology.orchestra.run.vm03.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
2026-03-06T13:34:05.236 INFO:teuthology.orchestra.run.vm03.stdout:Creating mgr...
2026-03-06T13:34:05.237 INFO:teuthology.orchestra.run.vm03.stdout:Verifying port 0.0.0.0:9283 ...
2026-03-06T13:34:05.237 INFO:teuthology.orchestra.run.vm03.stdout:Verifying port 0.0.0.0:8765 ...
2026-03-06T13:34:05.379 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a
2026-03-06T13:34:05.379 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to reset failed state of unit ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a.service: Unit ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a.service not loaded.
2026-03-06T13:34:05.509 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057.target.wants/ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a.service → /etc/systemd/system/ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@.service.
2026-03-06T13:34:05.646 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:05 vm03 podman[50635]: 2026-03-06 13:34:05.617602819 +0100 CET m=+0.014383143 container create 1a2ab987f0731409ecae337fa89257e18b0fb184d162c250a9cc92e591c7ea3f (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
2026-03-06T13:34:05.681 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present
2026-03-06T13:34:05.681 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to enable service . firewalld.service is not available
2026-03-06T13:34:05.681 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present
2026-03-06T13:34:05.681 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available
2026-03-06T13:34:05.681 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr to start...
2026-03-06T13:34:05.681 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr...
2026-03-06T13:34:05.965 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:05 vm03 podman[50635]: 2026-03-06 13:34:05.660399743 +0100 CET m=+0.057180067 container init 1a2ab987f0731409ecae337fa89257e18b0fb184d162c250a9cc92e591c7ea3f (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8)
2026-03-06T13:34:05.965 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:05 vm03 podman[50635]: 2026-03-06 13:34:05.665073571 +0100 CET m=+0.061853895 container start 1a2ab987f0731409ecae337fa89257e18b0fb184d162c250a9cc92e591c7ea3f (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552)
2026-03-06T13:34:05.965 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:05 vm03 bash[50635]: 1a2ab987f0731409ecae337fa89257e18b0fb184d162c250a9cc92e591c7ea3f
2026-03-06T13:34:05.965 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:05 vm03 podman[50635]: 2026-03-06 13:34:05.611397635 +0100 CET m=+0.008177959 image pull 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3
2026-03-06T13:34:05.965 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:05 vm03 systemd[1]: Started Ceph mgr.a for b4d7b36a-1958-11f1-a2a1-8fd8798eb057.
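[Editor's note: the systemd unit and podman container names above follow cephadm's fsid-scoped naming scheme. A minimal sketch of that scheme using this run's fsid; this is illustrative, not cephadm's actual implementation.]

# fsid of the cluster bootstrapped in this run (from the log above)
FSID = "b4d7b36a-1958-11f1-a2a1-8fd8798eb057"

def unit_name(daemon_type: str, daemon_id: str, fsid: str = FSID) -> str:
    # e.g. the unit systemctl reset-failed was attempted on above
    return f"ceph-{fsid}@{daemon_type}.{daemon_id}.service"

def container_name(daemon_type: str, daemon_id: str, fsid: str = FSID) -> str:
    # e.g. the container podman created for mgr.a above
    return f"ceph-{fsid}-{daemon_type}-{daemon_id}"

# Both match the names visible in the log records above.
assert unit_name("mgr", "a") == "ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a.service"
assert container_name("mgr", "a") == "ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a"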
2026-03-06T13:34:05.965 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:05 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:05.899+0000 7fa967c3e100 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "b4d7b36a-1958-11f1-a2a1-8fd8798eb057",
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": {
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a"
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 1,
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-06T13:34:06.111 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-06T12:34:03:119101+0000",
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful"
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-06T13:34:06.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:06.113 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-06T13:34:06.113 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:06.113 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-06T12:34:03.119801+0000",
2026-03-06T13:34:06.113 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-06T13:34:06.113 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:06.113 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-06T13:34:06.113 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-06T13:34:06.113 INFO:teuthology.orchestra.run.vm03.stdout:mgr not available, waiting (1/15)...
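[Editor's note: the "mgr not available, waiting (n/15)..." records above appear to come from a readiness poll that runs `ceph status --format json` and checks mgrmap.available. A minimal sketch of such a poll; the retry count and delay are assumptions for illustration, not the bootstrap's actual values.]

import json
import subprocess
import time

def wait_for_mgr(tries: int = 15, delay: float = 2.0) -> bool:
    """Poll `ceph status` until the active mgr reports itself available."""
    for attempt in range(1, tries + 1):
        out = subprocess.run(
            ["ceph", "status", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        if json.loads(out).get("mgrmap", {}).get("available"):
            return True
        print(f"mgr not available, waiting ({attempt}/{tries})...")
        time.sleep(delay)
    return False

[In the dumps below, "available" stays false while the mgr is still loading its Python modules, which is why the poll repeats.]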
2026-03-06T13:34:06.359 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:06 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:06.040+0000 7fa967c3e100 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-06T13:34:06.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:06 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/4113111076' entity='client.admin'
2026-03-06T13:34:06.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:06 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/981159086' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-06T13:34:07.609 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:07 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:07.106+0000 7fa967c3e100 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-06T13:34:08.166 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:07 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:07.908+0000 7fa967c3e100 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-06T13:34:08.167 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:08 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:08.022+0000 7fa967c3e100 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-06T13:34:08.442 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:08 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:08.260+0000 7fa967c3e100 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-06T13:34:08.442 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:08 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/1020431738' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "b4d7b36a-1958-11f1-a2a1-8fd8798eb057",
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": {
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a"
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 3,
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:08.533 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-06T12:34:03:119101+0000",
2026-03-06T13:34:08.534 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful"
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-06T12:34:03.119801+0000",
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-06T13:34:08.535 INFO:teuthology.orchestra.run.vm03.stdout:mgr not available, waiting (2/15)...
2026-03-06T13:34:10.359 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:10 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:10.060+0000 7fa967c3e100 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-06T13:34:10.644 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:10 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:10.384+0000 7fa967c3e100 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-06T13:34:10.644 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:10 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:10.515+0000 7fa967c3e100 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-06T13:34:10.903 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:10 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/3724600356' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-06T13:34:10.904 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:10 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:10.657+0000 7fa967c3e100 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-06T13:34:10.904 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:10 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:10.802+0000 7fa967c3e100 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-06T13:34:11.013 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-06T13:34:11.013 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-06T13:34:11.013 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "b4d7b36a-1958-11f1-a2a1-8fd8798eb057",
2026-03-06T13:34:11.013 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": {
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a"
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 5,
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:11.014 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-06T12:34:03:119101+0000",
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful"
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-06T12:34:03.119801+0000",
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-06T13:34:11.015 INFO:teuthology.orchestra.run.vm03.stdout:mgr not available, waiting (3/15)...
2026-03-06T13:34:11.360 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:10 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:10.954+0000 7fa967c3e100 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-06T13:34:11.860 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:11 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:11.477+0000 7fa967c3e100 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-06T13:34:11.860 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:11 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:11.631+0000 7fa967c3e100 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-06T13:34:12.609 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:12 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:12.331+0000 7fa967c3e100 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-06T13:34:13.445 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-06T13:34:13.445 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-06T13:34:13.445 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "b4d7b36a-1958-11f1-a2a1-8fd8798eb057",
2026-03-06T13:34:13.445 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": {
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a"
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 8,
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
"num_up_osds": 0, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-06T12:34:03:119101+0000", 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful" 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-06T13:34:13.446 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-06T13:34:13.447 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-06T13:34:13.447 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-06T13:34:13.447 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-06T13:34:13.447 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-06T12:34:03.119801+0000", 2026-03-06T13:34:13.447 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-06T13:34:13.447 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-06T13:34:13.447 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-06T13:34:13.447 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-06T13:34:13.447 INFO:teuthology.orchestra.run.vm03.stdout:mgr not available, waiting (4/15)... 2026-03-06T13:34:13.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:13 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/2969520466' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-06T13:34:13.610 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:13 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:13.383+0000 7fa967c3e100 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-06T13:34:13.610 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:13 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:13.519+0000 7fa967c3e100 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-06T13:34:13.905 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:13 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:13.646+0000 7fa967c3e100 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-06T13:34:13.905 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:13 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:13.902+0000 7fa967c3e100 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-06T13:34:14.331 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:14 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:14.023+0000 7fa967c3e100 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-06T13:34:14.609 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:14 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:14.329+0000 7fa967c3e100 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-06T13:34:15.020 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:14 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:14.665+0000 7fa967c3e100 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-06T13:34:15.020 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:15.018+0000 7fa967c3e100 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:15.141+0000 7fa967c3e100 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: Activating manager daemon a 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: mgrmap e2: a(active, starting, since 0.0140933s) 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: from='mgr.14100 192.168.123.103:0/2322862462' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: from='mgr.14100 192.168.123.103:0/2322862462' entity='mgr.a' cmd=[{"prefix": "osd 
metadata"}]: dispatch 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: from='mgr.14100 192.168.123.103:0/2322862462' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: from='mgr.14100 192.168.123.103:0/2322862462' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: from='mgr.14100 192.168.123.103:0/2322862462' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: from='mgr.14100 192.168.123.103:0/2322862462' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: from='mgr.14100 192.168.123.103:0/2322862462' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: from='mgr.14100 192.168.123.103:0/2322862462' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: Manager daemon a is now available 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: from='mgr.14100 192.168.123.103:0/2322862462' entity='mgr.a' 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: from='mgr.14100 192.168.123.103:0/2322862462' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: from='mgr.14100 192.168.123.103:0/2322862462' entity='mgr.a' 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: from='mgr.14100 192.168.123.103:0/2322862462' entity='mgr.a' 2026-03-06T13:34:15.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:15 vm03 ceph-mon[50411]: from='mgr.14100 192.168.123.103:0/2322862462' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-06T13:34:16.046 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "b4d7b36a-1958-11f1-a2a1-8fd8798eb057", 2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": { 2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0 
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a"
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 10,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-06T12:34:03:119101+0000",
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful"
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:16.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-06T12:34:03.119801+0000",
2026-03-06T13:34:16.048 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-06T13:34:16.048 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:16.048 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-06T13:34:16.048 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-06T13:34:16.048 INFO:teuthology.orchestra.run.vm03.stdout:mgr not available, waiting (5/15)...
2026-03-06T13:34:16.859 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:16 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/2545344137' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-06T13:34:18.109 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:17 vm03 ceph-mon[50411]: mgrmap e3: a(active, since 1.20606s)
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "b4d7b36a-1958-11f1-a2a1-8fd8798eb057",
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": {
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a"
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 13,
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-06T13:34:18.815 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-06T12:34:03:119101+0000",
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful"
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-06T12:34:03.119801+0000",
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-06T13:34:18.816 INFO:teuthology.orchestra.run.vm03.stdout:mgr is available
2026-03-06T13:34:19.192 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:19 vm03 ceph-mon[50411]: mgrmap e4: a(active, since 2s)
2026-03-06T13:34:19.192 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:19 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/1994586684' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [global]
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout fsid = b4d7b36a-1958-11f1-a2a1-8fd8798eb057
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.103:3300,v1:192.168.123.103:6789]
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [osd]
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-06T13:34:19.346 INFO:teuthology.orchestra.run.vm03.stdout:Enabling cephadm module...
2026-03-06T13:34:20.390 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:20 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: ignoring --setuser ceph since I am not root
2026-03-06T13:34:20.390 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:20 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: ignoring --setgroup ceph since I am not root
2026-03-06T13:34:20.391 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:20 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/697273189' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-06T13:34:20.391 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:20 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/697273189' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
2026-03-06T13:34:20.391 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:20 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/528042247' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-06T13:34:20.696 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:20 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:20.495+0000 7efc74084100 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-06T13:34:20.696 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:20 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:20.628+0000 7efc74084100 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-06T13:34:21.021 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-06T13:34:21.021 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 5,
2026-03-06T13:34:21.021 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-06T13:34:21.021 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "active_name": "a",
2026-03-06T13:34:21.021 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-06T13:34:21.021 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-06T13:34:21.021 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for the mgr to restart...
2026-03-06T13:34:21.021 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr epoch 5...
2026-03-06T13:34:21.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:21 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/528042247' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-06T13:34:21.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:21 vm03 ceph-mon[50411]: mgrmap e5: a(active, since 5s)
2026-03-06T13:34:21.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:21 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/3498415698' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-06T13:34:22.109 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:21 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:21.812+0000 7efc74084100 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-06T13:34:23.109 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:22 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:22.751+0000 7efc74084100 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-06T13:34:23.109 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:22 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:22.876+0000 7efc74084100 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-06T13:34:23.609 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:23 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:23.142+0000 7efc74084100 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-06T13:34:25.359 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:25 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:25.068+0000 7efc74084100 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-06T13:34:25.700 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:25 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:25.427+0000 7efc74084100 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-06T13:34:25.700 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:25 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:25.568+0000 7efc74084100 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-06T13:34:25.991 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:25 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:25.697+0000 7efc74084100 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-06T13:34:25.991 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:25 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:25.848+0000 7efc74084100 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-06T13:34:26.359 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:25 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:25.988+0000 7efc74084100 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-06T13:34:26.859 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:26 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:26.530+0000 7efc74084100 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-06T13:34:26.859 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:26 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:26.687+0000 7efc74084100 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-06T13:34:27.859 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:27 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:27.463+0000 7efc74084100 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-06T13:34:28.842 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:28 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:28.566+0000 7efc74084100 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-06T13:34:28.842 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:28 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:28.696+0000 7efc74084100 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-06T13:34:29.109 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:28 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:28.839+0000 7efc74084100 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-06T13:34:29.558 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:29 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:29.120+0000 7efc74084100 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-06T13:34:29.558 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:29 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:29.245+0000 7efc74084100 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-06T13:34:29.859 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:29 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:29.555+0000 7efc74084100 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-06T13:34:30.311 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:29 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:29.925+0000 7efc74084100 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-06T13:34:30.609 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:30.309+0000 7efc74084100 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-06T13:34:30.610 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:30.439+0000 7efc74084100 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-06T13:34:31.109 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: Active manager daemon a restarted
2026-03-06T13:34:31.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: Activating manager daemon a
2026-03-06T13:34:31.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: osdmap e2: 0 total, 0 up, 0 in
2026-03-06T13:34:31.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: mgrmap e6: a(active, starting, since 0.198152s)
2026-03-06T13:34:31.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-06T13:34:31.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-06T13:34:31.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-06T13:34:31.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-06T13:34:31.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-06T13:34:31.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: Manager daemon a is now available
2026-03-06T13:34:31.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a'
2026-03-06T13:34:31.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a'
2026-03-06T13:34:31.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T13:34:31.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T13:34:31.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-06T13:34:31.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:30 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-06T13:34:31.808 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-06T13:34:31.808 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7,
2026-03-06T13:34:31.808 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-06T13:34:31.808 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-06T13:34:31.808 INFO:teuthology.orchestra.run.vm03.stdout:mgr epoch 5 is available
2026-03-06T13:34:31.808 INFO:teuthology.orchestra.run.vm03.stdout:Setting orchestrator backend to cephadm...
2026-03-06T13:34:32.197 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:32 vm03 ceph-mon[50411]: Found migration_current of "None". Setting to last migration.
2026-03-06T13:34:32.197 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:32 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a'
2026-03-06T13:34:32.197 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:32 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a'
2026-03-06T13:34:32.197 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:32 vm03 ceph-mon[50411]: mgrmap e7: a(active, since 1.201s)
2026-03-06T13:34:32.821 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout value unchanged
2026-03-06T13:34:32.821 INFO:teuthology.orchestra.run.vm03.stdout:Generating ssh key...
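Condensed, the helper's steps up to this point map onto the following CLI sequence, reconstructed from the commands dispatched above (a sketch, not the helper's literal code; the conf path is illustrative):

  ceph config assimilate-conf -i /etc/ceph/ceph.conf   # folds the dumped [global]/[mgr]/[osd] snippet into the mon config db
  ceph mgr module enable cephadm                       # forces the active mgr to restart with the module loaded
  ceph mgr stat                                        # polled until the mgrmap epoch passes the pre-enable epoch (5 here)
  ceph orch set backend cephadm                        # reports "value unchanged" when already set, as above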
2026-03-06T13:34:33.248 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-mon[50411]: from='client.14128 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-06T13:34:33.248 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-mon[50411]: from='client.14128 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-06T13:34:33.248 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-mon[50411]: [06/Mar/2026:12:34:32] ENGINE Bus STARTING
2026-03-06T13:34:33.248 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-mon[50411]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T13:34:33.248 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a'
2026-03-06T13:34:33.248 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T13:34:33.248 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: Generating public/private rsa key pair.
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: Your identification has been saved in /tmp/tmpxggdwfvb/key
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: Your public key has been saved in /tmp/tmpxggdwfvb/key.pub
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: The key fingerprint is:
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: SHA256:i+RCFd6jqTdeqWuLdfJw+6toBOSwxKJS7uh42Gry82Y ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: The key's randomart image is:
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: +---[RSA 3072]----+
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: | . . |
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: |. = .. o |
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: |.= = o o |
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: |o o o. o . |
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: |.o ..+ S |
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: |. .. +.. o |
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: |oo o.O * |
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: |+o+ E*o@ . |
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: |++.=oo*o+oo. |
2026-03-06T13:34:33.509 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: +----[SHA256]-----+
2026-03-06T13:34:33.886 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQsppUibzq0HHwOu8JC528H6mbmfyDQu6/fS/RMK+o4LsmXkg+sq+egOZrB7WD08zM7xwktM2B2ZOFUsOHPgTEDjFT08MdwOj2W7SOIw+QvtVrKZcPUTORvxPDqSOqpQsMf4LTscPgMNfwG67FEj++4tO5+916kmyVIg7XKetYeaONBeCYChG08MJOyIklw9Jz8kSk9Q3JUZChigpbtj8nN4Go7SkyKJ3hUF6D5wAhYt/IssJ3WEpbme9DXizEYDNze3xEqs15EvT44QbMHUauJznwNNxH7tv36TuJebiMG84jW611rXlrLeEAuPeVXRZJgdvzoMgbo5M64en5u8D/OrlTWmt7jhLGpc71tYwfMqy7vsqkEKkvKOSP4UwG9rRsjBriICfDphfTRgWbQg+dfQ05mPyPEoAHkiiDn+V9SEOLM9WaGjQvufOhXruNTMkjfX9Q1/9IKbAqIFqcW6ODSQmSi6Ffz08l9gNDJbo9jfZQYyEtAGabzTGnOly6cwk= ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057
2026-03-06T13:34:33.887 INFO:teuthology.orchestra.run.vm03.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub
2026-03-06T13:34:33.887 INFO:teuthology.orchestra.run.vm03.stdout:Adding key to root@localhost authorized_keys...
2026-03-06T13:34:33.887 INFO:teuthology.orchestra.run.vm03.stdout:Adding host vm03...
2026-03-06T13:34:34.469 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:34 vm03 ceph-mon[50411]: [06/Mar/2026:12:34:32] ENGINE Serving on http://192.168.123.103:8765
2026-03-06T13:34:34.469 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:34 vm03 ceph-mon[50411]: [06/Mar/2026:12:34:32] ENGINE Serving on https://192.168.123.103:7150
2026-03-06T13:34:34.469 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:34 vm03 ceph-mon[50411]: [06/Mar/2026:12:34:32] ENGINE Bus STARTED
2026-03-06T13:34:34.469 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:34 vm03 ceph-mon[50411]: [06/Mar/2026:12:34:32] ENGINE Client ('192.168.123.103', 60500) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-06T13:34:34.469 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:34 vm03 ceph-mon[50411]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T13:34:34.469 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:34 vm03 ceph-mon[50411]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T13:34:34.469 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:34 vm03 ceph-mon[50411]: Generating ssh key...
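The key generation above runs inside the mgr via the cephadm module; the dispatched monitor commands correspond one-to-one to:

  ceph cephadm set-user root
  ceph cephadm generate-key
  ceph cephadm get-pub-key > /home/ubuntu/cephtest/ceph.pub   # destination path taken from the log line above

The public key then has to land in root's authorized_keys on every host cephadm should manage, which is the "Adding key to root@localhost authorized_keys..." step before the host is registered.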
2026-03-06T13:34:34.469 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:34 vm03 ceph-mon[50411]: mgrmap e8: a(active, since 2s)
2026-03-06T13:34:34.469 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:34 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a'
2026-03-06T13:34:34.469 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:34 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a'
2026-03-06T13:34:35.193 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:35 vm03 ceph-mon[50411]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T13:34:36.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:36 vm03 ceph-mon[50411]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "addr": "192.168.123.103", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T13:34:36.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:36 vm03 ceph-mon[50411]: Deploying cephadm binary to vm03
2026-03-06T13:34:36.883 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Added host 'vm03' with addr '192.168.123.103'
2026-03-06T13:34:36.883 INFO:teuthology.orchestra.run.vm03.stdout:Deploying unmanaged mon service...
2026-03-06T13:34:37.482 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Scheduled mon update...
2026-03-06T13:34:37.482 INFO:teuthology.orchestra.run.vm03.stdout:Deploying unmanaged mgr service...
2026-03-06T13:34:38.109 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Scheduled mgr update...
2026-03-06T13:34:38.145 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:37 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a'
2026-03-06T13:34:38.145 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:37 vm03 ceph-mon[50411]: Added host vm03
2026-03-06T13:34:38.145 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:37 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T13:34:38.145 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:37 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a'
2026-03-06T13:34:39.172 INFO:teuthology.orchestra.run.vm03.stdout:Enabling the dashboard module...
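Host registration and the two unmanaged service specs boil down to the following, reconstructed from the dispatched prefixes above (--unmanaged stops cephadm from scheduling or moving these daemons on its own, leaving placement to the test):

  ceph orch host add vm03 192.168.123.103
  ceph orch apply mon --unmanaged
  ceph orch apply mgr --unmanaged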
2026-03-06T13:34:39.212 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:38 vm03 ceph-mon[50411]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T13:34:39.213 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:38 vm03 ceph-mon[50411]: Saving service mon spec with placement count:5
2026-03-06T13:34:39.213 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:38 vm03 ceph-mon[50411]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T13:34:39.213 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:38 vm03 ceph-mon[50411]: Saving service mgr spec with placement count:2
2026-03-06T13:34:39.213 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:38 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a'
2026-03-06T13:34:39.213 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:38 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a'
2026-03-06T13:34:39.213 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:38 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/1414573141' entity='client.admin'
2026-03-06T13:34:39.213 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:38 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a'
2026-03-06T13:34:40.247 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:40 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/1407388206' entity='client.admin'
2026-03-06T13:34:40.247 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:40 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/816502162' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-06T13:34:40.247 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:40 vm03 ceph-mon[50411]: from='mgr.14124 192.168.123.103:0/4178572541' entity='mgr.a'
2026-03-06T13:34:40.248 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:40 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: ignoring --setuser ceph since I am not root
2026-03-06T13:34:40.248 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:40 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: ignoring --setgroup ceph since I am not root
2026-03-06T13:34:40.518 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:40 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:40.371+0000 7fbf137ca100 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-06T13:34:40.518 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:40 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:40.515+0000 7fbf137ca100 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-06T13:34:40.716 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-06T13:34:40.716 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 9,
2026-03-06T13:34:40.716 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-06T13:34:40.716 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "active_name": "a",
2026-03-06T13:34:40.716 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-06T13:34:40.716 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-06T13:34:40.716 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for the mgr to restart...
2026-03-06T13:34:40.716 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr epoch 9...
2026-03-06T13:34:41.109 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:41 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/816502162' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-06T13:34:41.109 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:41 vm03 ceph-mon[50411]: mgrmap e9: a(active, since 9s)
2026-03-06T13:34:41.109 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:41 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/1143975880' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-06T13:34:42.109 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:41 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:41.762+0000 7fbf137ca100 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-06T13:34:43.063 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:42 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:42.656+0000 7fbf137ca100 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-06T13:34:43.063 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:42 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:42.792+0000 7fbf137ca100 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-06T13:34:43.359 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:43 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:43.061+0000 7fbf137ca100 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-06T13:34:45.319 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:44 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:44.981+0000 7fbf137ca100 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-06T13:34:45.574 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:45 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:45.317+0000 7fbf137ca100 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-06T13:34:45.574 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:45 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:45.452+0000 7fbf137ca100 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-06T13:34:45.839 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:45 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:45.572+0000 7fbf137ca100 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-06T13:34:45.839 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:45 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:45.712+0000 7fbf137ca100 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-06T13:34:46.109 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:45 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:45.836+0000 7fbf137ca100 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-06T13:34:46.609 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:46 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:46.354+0000 7fbf137ca100 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-06T13:34:46.609 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:46 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:46.509+0000 7fbf137ca100 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-06T13:34:47.609 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:47 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:47.233+0000 7fbf137ca100 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-06T13:34:48.609 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:48 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:48.295+0000 7fbf137ca100 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-06T13:34:48.610 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:48 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:48.416+0000 7fbf137ca100 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-06T13:34:48.610 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:48 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:48.544+0000 7fbf137ca100 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-06T13:34:49.109 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:48 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:48.798+0000 7fbf137ca100 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-06T13:34:49.109 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:48 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:48.918+0000 7fbf137ca100 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-06T13:34:49.544 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:49 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:49.216+0000 7fbf137ca100 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-06T13:34:49.859 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:49 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:49.542+0000 7fbf137ca100 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-06T13:34:50.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:49 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:49.893+0000 7fbf137ca100 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-06T13:34:50.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:34:50 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[50645]: 2026-03-06T12:34:50.011+0000 7fbf137ca100 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-06T13:34:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:50 vm03 ceph-mon[50411]: Active manager daemon a restarted
2026-03-06T13:34:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:50 vm03 ceph-mon[50411]: Activating manager daemon a
2026-03-06T13:34:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:50 vm03 ceph-mon[50411]: osdmap e3: 0 total, 0 up, 0 in
2026-03-06T13:34:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:50 vm03 ceph-mon[50411]: mgrmap e10: a(active, starting, since 0.0108842s)
2026-03-06T13:34:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-06T13:34:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-06T13:34:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:50 vm03 ceph-mon[50411]:
from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-06T13:34:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-06T13:34:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-06T13:34:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:50 vm03 ceph-mon[50411]: Manager daemon a is now available 2026-03-06T13:34:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T13:34:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-06T13:34:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-06T13:34:51.195 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-06T13:34:51.195 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-06T13:34:51.195 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-06T13:34:51.195 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-06T13:34:51.195 INFO:teuthology.orchestra.run.vm03.stdout:mgr epoch 9 is available 2026-03-06T13:34:51.195 INFO:teuthology.orchestra.run.vm03.stdout:Generating a dashboard self-signed certificate... 2026-03-06T13:34:51.773 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-06T13:34:51.773 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial admin user... 
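The two stdout blocks above are the harness polling for the mgr to come back after the dashboard module was enabled: it reads 'ceph mgr stat' until the map reports available and the epoch has moved past 9. A minimal stand-alone sketch of that wait loop, assuming jq is installed on the host (the 2-second interval is illustrative, not taken from this run):

    # Poll the mgr map until the restarted active mgr reports available,
    # mirroring the "Waiting for mgr epoch 9..." step in the log above.
    until [ "$(ceph mgr stat -f json | jq -r '.available')" = "true" ]; do
        sleep 2
    done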
2026-03-06T13:34:51.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:51 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:51.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:51 vm03 ceph-mon[50411]: [06/Mar/2026:12:34:50] ENGINE Bus STARTING 2026-03-06T13:34:51.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:51 vm03 ceph-mon[50411]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-06T13:34:51.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:51 vm03 ceph-mon[50411]: mgrmap e11: a(active, since 1.01969s) 2026-03-06T13:34:51.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:51 vm03 ceph-mon[50411]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-06T13:34:51.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:51 vm03 ceph-mon[50411]: [06/Mar/2026:12:34:51] ENGINE Serving on https://192.168.123.103:7150 2026-03-06T13:34:51.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:51 vm03 ceph-mon[50411]: [06/Mar/2026:12:34:51] ENGINE Client ('192.168.123.103', 50762) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-06T13:34:51.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:51 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:51.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:51 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:51.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:51 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:52.445 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$uwS2NmorxXPPprUr.CzIGenwgFj1VtOSAzJl7W8vLOl8FBE6XqrOO", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1772800492, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-06T13:34:52.445 INFO:teuthology.orchestra.run.vm03.stdout:Fetching dashboard port number... 2026-03-06T13:34:52.917 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 8443 2026-03-06T13:34:52.918 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-06T13:34:52.918 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to open ports <[8443]>. 
firewalld.service is not available
2026-03-06T13:34:52.920 INFO:teuthology.orchestra.run.vm03.stdout:Ceph Dashboard is now available at:
2026-03-06T13:34:52.920 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:34:52.920 INFO:teuthology.orchestra.run.vm03.stdout:             URL: https://vm03.local:8443/
2026-03-06T13:34:52.920 INFO:teuthology.orchestra.run.vm03.stdout:            User: admin
2026-03-06T13:34:52.920 INFO:teuthology.orchestra.run.vm03.stdout:        Password: gc7j41gdrm
2026-03-06T13:34:52.920 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:34:52.920 INFO:teuthology.orchestra.run.vm03.stdout:Saving cluster configuration to /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/config directory
2026-03-06T13:34:53.036 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:52 vm03 ceph-mon[50411]: [06/Mar/2026:12:34:51] ENGINE Serving on http://192.168.123.103:8765
2026-03-06T13:34:53.036 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:52 vm03 ceph-mon[50411]: [06/Mar/2026:12:34:51] ENGINE Bus STARTED
2026-03-06T13:34:53.036 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:52 vm03 ceph-mon[50411]: from='client.14168 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T13:34:53.036 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:52 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:34:53.036 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:52 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/903797689' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config:
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:        sudo /home/ubuntu/cephtest/cephadm shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:Or, if you are only running a single cluster on this host:
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:        sudo /home/ubuntu/cephtest/cephadm shell
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:Please consider enabling telemetry to help improve Ceph:
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:        ceph telemetry on
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:For more information see:
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:        https://docs.ceph.com/en/latest/mgr/telemetry/
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:34:53.691 INFO:teuthology.orchestra.run.vm03.stdout:Bootstrap complete.
2026-03-06T13:34:53.728 INFO:tasks.cephadm:Fetching config...
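With bootstrap complete, the dashboard credentials printed above could be exercised directly. A quick reachability check, assuming curl is present on the host (this run does not perform such a check; -k tolerates the self-signed certificate created earlier):

    # Expect 200 (or a redirect) from the dashboard landing page on vm03.
    curl -ks -o /dev/null -w '%{http_code}\n' https://vm03.local:8443/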
2026-03-06T13:34:53.728 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-06T13:34:53.728 DEBUG:teuthology.orchestra.run.vm03:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-06T13:34:53.752 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-06T13:34:53.752 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-06T13:34:53.752 DEBUG:teuthology.orchestra.run.vm03:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-06T13:34:53.816 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-06T13:34:53.816 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-06T13:34:53.816 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/keyring of=/dev/stdout 2026-03-06T13:34:53.884 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-06T13:34:53.884 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-06T13:34:53.884 DEBUG:teuthology.orchestra.run.vm03:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-06T13:34:53.945 INFO:tasks.cephadm:Installing pub ssh key for root users... 2026-03-06T13:34:53.945 DEBUG:teuthology.orchestra.run.vm03:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQsppUibzq0HHwOu8JC528H6mbmfyDQu6/fS/RMK+o4LsmXkg+sq+egOZrB7WD08zM7xwktM2B2ZOFUsOHPgTEDjFT08MdwOj2W7SOIw+QvtVrKZcPUTORvxPDqSOqpQsMf4LTscPgMNfwG67FEj++4tO5+916kmyVIg7XKetYeaONBeCYChG08MJOyIklw9Jz8kSk9Q3JUZChigpbtj8nN4Go7SkyKJ3hUF6D5wAhYt/IssJ3WEpbme9DXizEYDNze3xEqs15EvT44QbMHUauJznwNNxH7tv36TuJebiMG84jW611rXlrLeEAuPeVXRZJgdvzoMgbo5M64en5u8D/OrlTWmt7jhLGpc71tYwfMqy7vsqkEKkvKOSP4UwG9rRsjBriICfDphfTRgWbQg+dfQ05mPyPEoAHkiiDn+V9SEOLM9WaGjQvufOhXruNTMkjfX9Q1/9IKbAqIFqcW6ODSQmSi6Ffz08l9gNDJbo9jfZQYyEtAGabzTGnOly6cwk= ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-06T13:34:54.007 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:53 vm03 ceph-mon[50411]: from='client.14170 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T13:34:54.007 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:53 vm03 ceph-mon[50411]: mgrmap e12: a(active, since 2s) 2026-03-06T13:34:54.007 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:53 vm03 ceph-mon[50411]: from='client.? 
192.168.123.103:0/4090643314' entity='client.admin' 2026-03-06T13:34:54.021 INFO:teuthology.orchestra.run.vm03.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQsppUibzq0HHwOu8JC528H6mbmfyDQu6/fS/RMK+o4LsmXkg+sq+egOZrB7WD08zM7xwktM2B2ZOFUsOHPgTEDjFT08MdwOj2W7SOIw+QvtVrKZcPUTORvxPDqSOqpQsMf4LTscPgMNfwG67FEj++4tO5+916kmyVIg7XKetYeaONBeCYChG08MJOyIklw9Jz8kSk9Q3JUZChigpbtj8nN4Go7SkyKJ3hUF6D5wAhYt/IssJ3WEpbme9DXizEYDNze3xEqs15EvT44QbMHUauJznwNNxH7tv36TuJebiMG84jW611rXlrLeEAuPeVXRZJgdvzoMgbo5M64en5u8D/OrlTWmt7jhLGpc71tYwfMqy7vsqkEKkvKOSP4UwG9rRsjBriICfDphfTRgWbQg+dfQ05mPyPEoAHkiiDn+V9SEOLM9WaGjQvufOhXruNTMkjfX9Q1/9IKbAqIFqcW6ODSQmSi6Ffz08l9gNDJbo9jfZQYyEtAGabzTGnOly6cwk= ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057 2026-03-06T13:34:54.032 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-06T13:34:54.445 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:34:54.998 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-06T13:34:54.999 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-06T13:34:55.381 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:34:55.914 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-06T13:34:55.914 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph osd crush tunables default 2026-03-06T13:34:56.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:55 vm03 ceph-mon[50411]: from='client.? 
192.168.123.103:0/2927780308' entity='client.admin' 2026-03-06T13:34:56.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:55 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:56.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:55 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:56.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:55 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-06T13:34:56.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:55 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:56.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:55 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T13:34:56.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:55 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:56.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:55 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:56.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:55 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T13:34:56.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:55 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T13:34:56.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:55 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T13:34:56.288 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:34:57.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:56 vm03 ceph-mon[50411]: from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T13:34:57.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:56 vm03 ceph-mon[50411]: Updating vm03:/etc/ceph/ceph.conf 2026-03-06T13:34:57.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:56 vm03 ceph-mon[50411]: Updating vm03:/var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/config/ceph.conf 2026-03-06T13:34:57.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:56 vm03 ceph-mon[50411]: Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-06T13:34:57.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:56 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:57.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:56 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:57.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:56 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:57.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:56 vm03 ceph-mon[50411]: from='client.? 
192.168.123.103:0/832943652' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-06T13:34:57.394 INFO:teuthology.orchestra.run.vm03.stderr:adjusted tunables profile to default 2026-03-06T13:34:57.579 INFO:tasks.cephadm:Adding mon.a on vm03 2026-03-06T13:34:57.579 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph orch apply mon '1;vm03:192.168.123.103=a' 2026-03-06T13:34:57.902 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:34:58.230 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:57 vm03 ceph-mon[50411]: Updating vm03:/var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/config/ceph.client.admin.keyring 2026-03-06T13:34:58.230 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:57 vm03 ceph-mon[50411]: mgrmap e13: a(active, since 6s) 2026-03-06T13:34:58.230 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:57 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/832943652' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-06T13:34:58.230 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:57 vm03 ceph-mon[50411]: osdmap e4: 0 total, 0 up, 0 in 2026-03-06T13:34:58.231 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled mon update... 2026-03-06T13:34:58.420 INFO:tasks.cephadm:Waiting for 1 mons in monmap... 2026-03-06T13:34:58.420 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph mon dump -f json 2026-03-06T13:34:58.840 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:34:59.227 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-06T13:34:59.228 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"b4d7b36a-1958-11f1-a2a1-8fd8798eb057","modified":"2026-03-06T12:34:01.596754Z","created":"2026-03-06T12:34:01.596754Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-06T13:34:59.228 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-06T13:34:59.386 INFO:tasks.cephadm:Generating final ceph.conf file... 
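The "Waiting for 1 mons in monmap" step parses the JSON monmap dumped above (the harness does this parsing internally). A stand-alone sketch of the same check, again assuming jq:

    # One mon ("a") is expected in the map, matching the dump above.
    test "$(ceph mon dump -f json 2>/dev/null | jq '.mons | length')" -eq 1 \
        && echo "monmap has the expected single mon"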
2026-03-06T13:34:59.386 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph config generate-minimal-conf 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "1;vm03:192.168.123.103=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: Saving service mon spec with placement vm03:192.168.123.103=a;count:1 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: Reconfiguring mon.a (unknown last config time)... 
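"Generating final ceph.conf" boils the bootstrap config down to fsid and mon_host only, as the stdout a few lines below shows. Reproducing the generate-and-install step by hand would look roughly like this (the harness streams the file through dd rather than tee):

    # Emit the minimal conf and install it as the host's /etc/ceph/ceph.conf.
    ceph config generate-minimal-conf | sudo tee /etc/ceph/ceph.conf >/dev/null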
2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: Reconfiguring daemon mon.a on vm03 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:34:59.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:34:59 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/2296907924' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-06T13:34:59.709 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:35:00.059 INFO:teuthology.orchestra.run.vm03.stdout:# minimal ceph.conf for b4d7b36a-1958-11f1-a2a1-8fd8798eb057 2026-03-06T13:35:00.059 INFO:teuthology.orchestra.run.vm03.stdout:[global] 2026-03-06T13:35:00.059 INFO:teuthology.orchestra.run.vm03.stdout: fsid = b4d7b36a-1958-11f1-a2a1-8fd8798eb057 2026-03-06T13:35:00.059 INFO:teuthology.orchestra.run.vm03.stdout: mon_host = [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] 2026-03-06T13:35:00.314 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:00 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/1576456457' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T13:35:00.316 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 2026-03-06T13:35:00.316 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-06T13:35:00.316 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.conf 2026-03-06T13:35:00.339 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-06T13:35:00.339 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-06T13:35:00.406 INFO:tasks.cephadm:Adding mgr.a on vm03 2026-03-06T13:35:00.406 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph orch apply mgr '1;vm03=a' 2026-03-06T13:35:00.772 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:35:01.137 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled mgr update... 2026-03-06T13:35:01.328 INFO:tasks.cephadm:Deploying OSDs... 
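Note the placement syntax in the two orch apply calls above: a daemon count, then host[:ip]=daemon-id pairs, which pins mon.a and mgr.a to vm03 instead of letting the orchestrator choose hosts. Stripped of the cephadm shell wrapper, the pair of calls is just:

    # Pin one mon (with an explicit IP) and one mgr to vm03, as the log shows.
    ceph orch apply mon '1;vm03:192.168.123.103=a'
    ceph orch apply mgr '1;vm03=a'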
2026-03-06T13:35:01.328 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-06T13:35:01.328 DEBUG:teuthology.orchestra.run.vm03:> dd if=/scratch_devs of=/dev/stdout 2026-03-06T13:35:01.346 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-06T13:35:01.347 DEBUG:teuthology.orchestra.run.vm03:> ls /dev/[sv]d? 2026-03-06T13:35:01.404 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vda 2026-03-06T13:35:01.404 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdb 2026-03-06T13:35:01.404 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdc 2026-03-06T13:35:01.404 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdd 2026-03-06T13:35:01.404 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vde 2026-03-06T13:35:01.404 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-06T13:35:01.404 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-06T13:35:01.404 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdb 2026-03-06T13:35:01.464 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdb 2026-03-06T13:35:01.464 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-06T13:35:01.464 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 223 Links: 1 Device type: fc,10 2026-03-06T13:35:01.464 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-06T13:35:01.464 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-06T13:35:01.464 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-06 13:34:54.904684951 +0100 2026-03-06T13:35:01.464 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-06 13:32:31.672733130 +0100 2026-03-06T13:35:01.464 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-06 13:32:31.672733130 +0100 2026-03-06T13:35:01.464 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-06 13:29:58.276000000 +0100 2026-03-06T13:35:01.464 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-06T13:35:01.528 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:01 vm03 ceph-mon[50411]: from='client.14188 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T13:35:01.528 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:01 vm03 ceph-mon[50411]: Saving service mgr spec with placement vm03=a;count:1 2026-03-06T13:35:01.528 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:01 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:35:01.528 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:01 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T13:35:01.528 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:01 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T13:35:01.528 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:01 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T13:35:01.528 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:01 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:35:01.528 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 
13:35:01 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:35:01.528 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:01 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-06T13:35:01.528 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:01 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-06T13:35:01.528 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:01 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T13:35:01.533 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-06T13:35:01.533 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-06T13:35:01.533 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000162495 s, 3.2 MB/s 2026-03-06T13:35:01.534 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-06T13:35:01.611 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdc 2026-03-06T13:35:01.668 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdc 2026-03-06T13:35:01.668 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-06T13:35:01.668 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 235 Links: 1 Device type: fc,20 2026-03-06T13:35:01.668 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-06T13:35:01.668 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-06T13:35:01.668 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-06 13:34:54.932684968 +0100 2026-03-06T13:35:01.668 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-06 13:32:31.674733125 +0100 2026-03-06T13:35:01.668 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-06 13:32:31.674733125 +0100 2026-03-06T13:35:01.668 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-06 13:29:58.281000000 +0100 2026-03-06T13:35:01.668 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-06T13:35:01.733 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-06T13:35:01.733 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-06T13:35:01.734 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.00018706 s, 2.7 MB/s 2026-03-06T13:35:01.735 DEBUG:teuthology.orchestra.run.vm03:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-06T13:35:01.795 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdd 2026-03-06T13:35:01.852 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdd 2026-03-06T13:35:01.852 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-06T13:35:01.852 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,30 2026-03-06T13:35:01.852 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-06T13:35:01.852 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-06T13:35:01.852 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-06 13:34:54.961684986 +0100 2026-03-06T13:35:01.852 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-06 13:32:31.664733148 +0100 2026-03-06T13:35:01.852 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-06 13:32:31.664733148 +0100 2026-03-06T13:35:01.852 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-06 13:29:58.285000000 +0100 2026-03-06T13:35:01.852 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-06T13:35:01.916 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-06T13:35:01.916 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-06T13:35:01.916 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 9.4657e-05 s, 5.4 MB/s 2026-03-06T13:35:01.917 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-06T13:35:01.973 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vde 2026-03-06T13:35:02.030 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vde 2026-03-06T13:35:02.030 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-06T13:35:02.030 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-06T13:35:02.030 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-06T13:35:02.030 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-06T13:35:02.030 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-06 13:34:55.005685013 +0100 2026-03-06T13:35:02.030 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-06 13:32:31.658733162 +0100 2026-03-06T13:35:02.030 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-06 13:32:31.658733162 +0100 2026-03-06T13:35:02.030 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-06 13:29:58.290000000 +0100 2026-03-06T13:35:02.030 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-06T13:35:02.095 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-06T13:35:02.095 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-06T13:35:02.095 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000298618 s, 1.7 MB/s 2026-03-06T13:35:02.096 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-06T13:35:02.152 INFO:tasks.cephadm:Deploying osd.0 on vm03 with /dev/vde... 
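The block above is the harness vetting scratch devices: /scratch_devs is absent (the dd exits 1), so it falls back to listing /dev/[sv]d?, drops the root disk /dev/vda, and then requires each candidate to stat cleanly, be readable, and not be mounted. Condensed into a loop (status=none is a GNU dd nicety this run does not pass):

    # Re-run the same per-device vetting by hand on the four candidates.
    for dev in /dev/vdb /dev/vdc /dev/vdd /dev/vde; do
        stat "$dev" >/dev/null 2>&1 || continue                         # node exists
        sudo dd if="$dev" of=/dev/null count=1 status=none || continue  # readable
        mount | grep -v devtmpfs | grep -q "$dev" && continue           # not mounted
        echo "usable: $dev"
    done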
2026-03-06T13:35:02.152 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- lvm zap /dev/vde 2026-03-06T13:35:02.513 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:35:02.794 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:02 vm03 ceph-mon[50411]: Reconfiguring mgr.a (unknown last config time)... 2026-03-06T13:35:02.794 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:02 vm03 ceph-mon[50411]: Reconfiguring daemon mgr.a on vm03 2026-03-06T13:35:02.795 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:02 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:35:02.795 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:02 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:35:03.841 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-06T13:35:03.861 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph orch daemon add osd vm03:/dev/vde 2026-03-06T13:35:04.195 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:35:04.887 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:04 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T13:35:04.887 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:04 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-06T13:35:04.887 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:04 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T13:35:05.927 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:05 vm03 ceph-mon[50411]: from='client.14190 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T13:35:07.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:06 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/1189906999' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "314c4c77-2809-4001-a1fe-5031b74f6cd2"}]: dispatch 2026-03-06T13:35:07.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:06 vm03 ceph-mon[50411]: from='client.? 
192.168.123.103:0/1189906999' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "314c4c77-2809-4001-a1fe-5031b74f6cd2"}]': finished 2026-03-06T13:35:07.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:06 vm03 ceph-mon[50411]: osdmap e5: 1 total, 0 up, 1 in 2026-03-06T13:35:07.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:06 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-06T13:35:07.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:06 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/1581152027' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-06T13:35:11.033 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:11 vm03 ceph-mon[50411]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T13:35:11.033 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:11 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-06T13:35:11.033 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:11 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T13:35:12.064 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:12 vm03 ceph-mon[50411]: Deploying daemon osd.0 on vm03 2026-03-06T13:35:13.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:13 vm03 ceph-mon[50411]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T13:35:15.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:15 vm03 ceph-mon[50411]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T13:35:15.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:15 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T13:35:15.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:15 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:35:15.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:15 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:35:16.272 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 0 on host 'vm03' 2026-03-06T13:35:16.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:16 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:35:16.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:16 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:35:16.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:16 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T13:35:16.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:16 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T13:35:16.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:16 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:35:16.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:16 vm03 ceph-mon[50411]: from='osd.0 [v2:192.168.123.103:6802/3036627600,v1:192.168.123.103:6803/3036627600]' entity='osd.0' cmd=[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-06T13:35:16.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:16 vm03 ceph-mon[50411]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T13:35:16.435 DEBUG:teuthology.orchestra.run.vm03:osd.0> sudo journalctl -f -n 0 -u ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.0.service 2026-03-06T13:35:16.436 INFO:tasks.cephadm:Deploying osd.1 on vm03 with /dev/vdd... 2026-03-06T13:35:16.436 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- lvm zap /dev/vdd 2026-03-06T13:35:16.939 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:35:17.247 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:17 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T13:35:17.247 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:17 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:35:17.247 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:17 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:35:17.247 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:17 vm03 ceph-mon[50411]: from='osd.0 [v2:192.168.123.103:6802/3036627600,v1:192.168.123.103:6803/3036627600]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-06T13:35:17.247 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:17 vm03 ceph-mon[50411]: osdmap e6: 1 total, 0 up, 1 in 2026-03-06T13:35:17.247 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:17 vm03 ceph-mon[50411]: from='osd.0 [v2:192.168.123.103:6802/3036627600,v1:192.168.123.103:6803/3036627600]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-06T13:35:17.247 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:17 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-06T13:35:18.435 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:18 vm03 ceph-mon[50411]: purged_snaps scrub starts 2026-03-06T13:35:18.435 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:18 vm03 ceph-mon[50411]: purged_snaps scrub ok 2026-03-06T13:35:18.435 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:18 vm03 ceph-mon[50411]: from='osd.0 [v2:192.168.123.103:6802/3036627600,v1:192.168.123.103:6803/3036627600]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-06T13:35:18.435 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:18 vm03 ceph-mon[50411]: osdmap e7: 1 total, 0 up, 1 in 2026-03-06T13:35:18.768 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:18 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-06T13:35:18.768 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:18 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 
2026-03-06T13:35:18.768 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:18 vm03 ceph-mon[50411]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T13:35:18.768 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:18 vm03 ceph-mon[50411]: Detected new or changed devices on vm03
2026-03-06T13:35:18.768 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:18 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:18.768 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:18 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:18.768 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:18 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-06T13:35:18.768 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:18 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T13:35:18.768 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:18 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-06T13:35:18.768 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:18 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:18.768 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:35:18 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0[60965]: 2026-03-06T12:35:18.467+0000 7fbe94a8b640 -1 osd.0 0 waiting for initial osdmap
2026-03-06T13:35:18.768 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:35:18 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0[60965]: 2026-03-06T12:35:18.473+0000 7fbe8f8a1640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-06T13:35:19.333 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:35:19.350 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph orch daemon add osd vm03:/dev/vdd
2026-03-06T13:35:19.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:19 vm03 ceph-mon[50411]: Adjusting osd_memory_target on vm03 to 257.0M
2026-03-06T13:35:19.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:19 vm03 ceph-mon[50411]: Unable to set osd_memory_target on vm03 to 269530726: error parsing value: Value '269530726' is below minimum 939524096
2026-03-06T13:35:19.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:19 vm03 ceph-mon[50411]: from='osd.0 [v2:192.168.123.103:6802/3036627600,v1:192.168.123.103:6803/3036627600]' entity='osd.0'
2026-03-06T13:35:19.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:19 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T13:35:19.671 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config
2026-03-06T13:35:20.622 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:20 vm03 ceph-mon[50411]: osd.0 [v2:192.168.123.103:6802/3036627600,v1:192.168.123.103:6803/3036627600] boot
2026-03-06T13:35:20.622 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:20 vm03 ceph-mon[50411]: osdmap e8: 1 total, 1 up, 1 in
2026-03-06T13:35:20.622 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:20 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T13:35:20.622 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:20 vm03 ceph-mon[50411]: from='client.14199 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T13:35:20.622 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:20 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-06T13:35:20.622 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:20 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-06T13:35:20.622 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:20 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T13:35:20.622 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:20 vm03 ceph-mon[50411]: pgmap v13: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-06T13:35:21.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:21 vm03 ceph-mon[50411]: osdmap e9: 1 total, 1 up, 1 in
2026-03-06T13:35:21.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:21 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/2817063921' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dcfd0b5e-f0e8-4d27-9ba3-77494068f199"}]: dispatch
2026-03-06T13:35:21.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:21 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/2817063921' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dcfd0b5e-f0e8-4d27-9ba3-77494068f199"}]': finished
2026-03-06T13:35:21.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:21 vm03 ceph-mon[50411]: osdmap e10: 2 total, 1 up, 2 in
2026-03-06T13:35:21.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:21 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T13:35:22.859 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:22 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/4041323939' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T13:35:22.859 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:22 vm03 ceph-mon[50411]: pgmap v16: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-06T13:35:25.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:25 vm03 ceph-mon[50411]: pgmap v17: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-06T13:35:27.274 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:27 vm03 ceph-mon[50411]: pgmap v18: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-06T13:35:27.274 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:27 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-06T13:35:27.274 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:27 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T13:35:28.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:28 vm03 ceph-mon[50411]: Deploying daemon osd.1 on vm03
2026-03-06T13:35:28.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:28 vm03 ceph-mon[50411]: pgmap v19: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-06T13:35:29.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:29 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T13:35:29.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:29 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:29.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:29 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:30.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:30 vm03 ceph-mon[50411]: pgmap v20: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-06T13:35:30.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:30 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:30.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:30 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:30.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:30 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T13:35:30.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:30 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-06T13:35:30.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:30 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:31.852 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 1 on host 'vm03'
2026-03-06T13:35:31.991 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:31 vm03 ceph-mon[50411]: from='osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-06T13:35:32.021 DEBUG:teuthology.orchestra.run.vm03:osd.1> sudo journalctl -f -n 0 -u ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.1.service
2026-03-06T13:35:32.023 INFO:tasks.cephadm:Deploying osd.2 on vm03 with /dev/vdc...
2026-03-06T13:35:32.023 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- lvm zap /dev/vdc
2026-03-06T13:35:32.498 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config
2026-03-06T13:35:32.797 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:32 vm03 ceph-mon[50411]: from='osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-06T13:35:32.797 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:32 vm03 ceph-mon[50411]: osdmap e11: 2 total, 1 up, 2 in
2026-03-06T13:35:32.797 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:32 vm03 ceph-mon[50411]: from='osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-06T13:35:32.797 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:32 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T13:35:32.797 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:32 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T13:35:32.797 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:32 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:32.797 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:32 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:32.797 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:32 vm03 ceph-mon[50411]: pgmap v22: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-06T13:35:33.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:33 vm03 ceph-mon[50411]: from='osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-06T13:35:33.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:33 vm03 ceph-mon[50411]: osdmap e12: 2 total, 1 up, 2 in
2026-03-06T13:35:33.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:33 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T13:35:33.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:33 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T13:35:33.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:33 vm03 ceph-mon[50411]: from='osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652]' entity='osd.1'
2026-03-06T13:35:33.860 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:35:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1[67040]: 2026-03-06T12:35:33.680+0000 7efe57972640 -1 osd.1 0 waiting for initial osdmap
2026-03-06T13:35:33.860 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:35:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1[67040]: 2026-03-06T12:35:33.687+0000 7efe52788640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-06T13:35:35.006 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:35:35.024 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph orch daemon add osd vm03:/dev/vdc
2026-03-06T13:35:35.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:34 vm03 ceph-mon[50411]: purged_snaps scrub starts
2026-03-06T13:35:35.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:34 vm03 ceph-mon[50411]: purged_snaps scrub ok
2026-03-06T13:35:35.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:34 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T13:35:35.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:34 vm03 ceph-mon[50411]: Detected new or changed devices on vm03
2026-03-06T13:35:35.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:34 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:35.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:34 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:35.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:34 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-06T13:35:35.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:34 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
2026-03-06T13:35:35.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:34 vm03 ceph-mon[50411]: Adjusting osd_memory_target on vm03 to 128.5M
2026-03-06T13:35:35.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:34 vm03 ceph-mon[50411]: Unable to set osd_memory_target on vm03 to 134765363: error parsing value: Value '134765363' is below minimum 939524096
2026-03-06T13:35:35.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:34 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T13:35:35.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:34 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-06T13:35:35.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:34 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:35.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:34 vm03 ceph-mon[50411]: pgmap v24: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-06T13:35:35.359 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config
2026-03-06T13:35:35.775 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:35 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T13:35:35.776 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:35 vm03 ceph-mon[50411]: osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652] boot
2026-03-06T13:35:35.776 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:35 vm03 ceph-mon[50411]: osdmap e13: 2 total, 2 up, 2 in
2026-03-06T13:35:35.776 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:35 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T13:35:36.109 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:35 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-06T13:35:36.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:35 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-06T13:35:36.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:35 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T13:35:37.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:37 vm03 ceph-mon[50411]: from='client.14208 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T13:35:37.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:37 vm03 ceph-mon[50411]: osdmap e14: 2 total, 2 up, 2 in
2026-03-06T13:35:37.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:37 vm03 ceph-mon[50411]: pgmap v27: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-06T13:35:38.006 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:38 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/3592168567' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "236345e0-86d2-4671-a2c3-ba26e1d204fd"}]: dispatch
2026-03-06T13:35:38.006 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:38 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/3592168567' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "236345e0-86d2-4671-a2c3-ba26e1d204fd"}]': finished
2026-03-06T13:35:38.006 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:38 vm03 ceph-mon[50411]: osdmap e15: 3 total, 2 up, 3 in
2026-03-06T13:35:38.006 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:38 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T13:35:38.006 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:38 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/3324389200' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T13:35:39.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:39 vm03 ceph-mon[50411]: pgmap v29: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-06T13:35:41.088 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:41 vm03 ceph-mon[50411]: pgmap v30: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-06T13:35:43.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:43 vm03 ceph-mon[50411]: pgmap v31: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-06T13:35:43.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:43 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-06T13:35:43.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:43 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T13:35:43.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:43 vm03 ceph-mon[50411]: Deploying daemon osd.2 on vm03
2026-03-06T13:35:45.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:45 vm03 ceph-mon[50411]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-06T13:35:46.264 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:46 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T13:35:46.264 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:46 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:46.264 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:46 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:47.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:47 vm03 ceph-mon[50411]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-06T13:35:47.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:47 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:47.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:47 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:47.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:47 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T13:35:47.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:47 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-06T13:35:47.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:47 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:47.586 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 2 on host 'vm03'
2026-03-06T13:35:47.765 DEBUG:teuthology.orchestra.run.vm03:osd.2> sudo journalctl -f -n 0 -u ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.2.service
2026-03-06T13:35:47.808 INFO:tasks.cephadm:Waiting for 3 OSDs to come up...
2026-03-06T13:35:47.808 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph osd stat -f json
2026-03-06T13:35:48.248 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:48 vm03 ceph-mon[50411]: from='osd.2 [v2:192.168.123.103:6818/3463899297,v1:192.168.123.103:6819/3463899297]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-06T13:35:48.248 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:48 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T13:35:48.248 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:48 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:48.248 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:48 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:48.299 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config
2026-03-06T13:35:48.682 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:35:48.877 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":16,"num_osds":3,"num_up_osds":2,"osd_up_since":1772800534,"num_in_osds":3,"osd_in_since":1772800537,"num_remapped_pgs":0}
2026-03-06T13:35:49.492 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:49 vm03 ceph-mon[50411]: pgmap v34: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-06T13:35:49.492 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:49 vm03 ceph-mon[50411]: from='osd.2 [v2:192.168.123.103:6818/3463899297,v1:192.168.123.103:6819/3463899297]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-06T13:35:49.493 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:49 vm03 ceph-mon[50411]: osdmap e16: 3 total, 2 up, 3 in
2026-03-06T13:35:49.493 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:49 vm03 ceph-mon[50411]: from='osd.2 [v2:192.168.123.103:6818/3463899297,v1:192.168.123.103:6819/3463899297]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-06T13:35:49.493 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:49 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T13:35:49.493 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:49 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/3981814683' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-06T13:35:49.877 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph osd stat -f json
2026-03-06T13:35:50.247 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config
2026-03-06T13:35:50.359 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:35:50 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2[73177]: 2026-03-06T12:35:50.029+0000 7f17531a5640 -1 osd.2 0 waiting for initial osdmap
2026-03-06T13:35:50.360 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:35:50 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2[73177]: 2026-03-06T12:35:50.039+0000 7f174e7ce640 -1 osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: from='osd.2 [v2:192.168.123.103:6818/3463899297,v1:192.168.123.103:6819/3463899297]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: osdmap e17: 3 total, 2 up, 3 in
2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: Detected new or changed devices on vm03
2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: Adjusting osd_memory_target on vm03 to 87737k
2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: Unable to set osd_memory_target on vm03 to 89843575: error parsing value: Value '89843575' is below minimum 939524096
2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: from='osd.2 [v2:192.168.123.103:6818/3463899297,v1:192.168.123.103:6819/3463899297]' entity='osd.2' 2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: pgmap v37: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-06T13:35:50.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:50 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-06T13:35:50.738 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-06T13:35:50.934 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":17,"num_osds":3,"num_up_osds":2,"osd_up_since":1772800534,"num_in_osds":3,"osd_in_since":1772800537,"num_remapped_pgs":0} 2026-03-06T13:35:51.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:51 vm03 ceph-mon[50411]: purged_snaps scrub starts 2026-03-06T13:35:51.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:51 vm03 ceph-mon[50411]: purged_snaps scrub ok 2026-03-06T13:35:51.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:51 vm03 ceph-mon[50411]: from='client.? 
192.168.123.103:0/3188353205' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-06T13:35:51.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:51 vm03 ceph-mon[50411]: osd.2 [v2:192.168.123.103:6818/3463899297,v1:192.168.123.103:6819/3463899297] boot 2026-03-06T13:35:51.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:51 vm03 ceph-mon[50411]: osdmap e18: 3 total, 3 up, 3 in 2026-03-06T13:35:51.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:51 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-06T13:35:51.934 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph osd stat -f json 2026-03-06T13:35:52.283 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:35:52.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:52 vm03 ceph-mon[50411]: pgmap v39: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-06T13:35:52.361 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:52 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-06T13:35:52.707 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-06T13:35:52.890 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":19,"num_osds":3,"num_up_osds":3,"osd_up_since":1772800551,"num_in_osds":3,"osd_in_since":1772800537,"num_remapped_pgs":0} 2026-03-06T13:35:52.890 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph osd dump --format=json 2026-03-06T13:35:53.268 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:35:53.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:53 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-06T13:35:53.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:53 vm03 ceph-mon[50411]: osdmap e19: 3 total, 3 up, 3 in 2026-03-06T13:35:53.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:53 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-06T13:35:53.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:53 vm03 ceph-mon[50411]: from='client.? 
192.168.123.103:0/3561572828' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-06T13:35:53.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77804]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-06T13:35:53.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77804]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-06T13:35:53.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77804]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-06T13:35:53.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77804]: pam_unix(sudo:session): session closed for user root 2026-03-06T13:35:53.610 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77766]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 2026-03-06T13:35:53.610 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77766]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-06T13:35:53.610 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77766]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-06T13:35:53.610 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77766]: pam_unix(sudo:session): session closed for user root 2026-03-06T13:35:53.610 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77785]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdd 2026-03-06T13:35:53.610 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77785]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-06T13:35:53.610 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-06T13:35:53.610 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77785]: pam_unix(sudo:session): session closed for user root 2026-03-06T13:35:53.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77795]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdc 2026-03-06T13:35:53.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77795]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-06T13:35:53.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77795]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-06T13:35:53.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:35:53 vm03 sudo[77795]: pam_unix(sudo:session): session closed for user root 2026-03-06T13:35:53.666 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-06T13:35:53.666 
INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":20,"fsid":"b4d7b36a-1958-11f1-a2a1-8fd8798eb057","created":"2026-03-06T12:34:03.119494+0000","modified":"2026-03-06T12:35:53.146168+0000","last_up_change":"2026-03-06T12:35:51.044694+0000","last_in_change":"2026-03-06T12:35:37.167807+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-06T12:35:52.065245+0000","flags":32769,"flags_names":"hashpspool,creating","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"20","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"314c4c77-2809-4001-a1fe-5031b74f6cd2","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":3036627600},{"type":"v1","addr":"192.168.123.103:6803","nonce":3036627600}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":3036627600},{"type":"v1","addr":"192.168.123.103:6805","nonce":3036627600}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":3036627600},{"type":"v1","addr":"192.168.123.103:6809","nonce":3036627600}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":3036627600},{"type":"v1","addr":"192.168.123.103:6807","nonce":3036627600}]},"public_addr":"192.168.123.103:6803/3036627600","cluster_addr":"192.168.123.103:6805/3036627600","heartbeat_back_addr":"192.168.123.103:6809/3036627600","heartbeat_front_addr":"192.168.123.103:6807/3036627600","state":["exists","up"]},{"osd":1,"uuid":"dcfd0b5e-f0e8-4d27-9ba3-77494068f199","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":19,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":3227865652},{"type":"v1","addr":"192.168.123.103:6811","nonce":3227865652}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":3227865652},{"type":"v1","addr":"192.168.123.103:6813","nonce":3227865652}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":3227865652},{"type":"v1","addr":"192.168.123.103:6817","nonce":3227865652}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":3227865652},{"type":"v1","addr":"192.168.123.103:6815","nonce":3227865652}]},"public_addr":"192.168.123.103:6811/3227865652","cluster_addr":"192.168.123.103:6813/3227865652","heartbeat_back_addr":"192.168.123.103:6817/3227865652","heartbeat_front_addr":"192.168.123.103:6815/3227865652","state":["exists","up"]},{"osd":2,"uuid":"236345e0-86d2-4671-a2c3-ba26e1d204fd","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6818","nonce":3463899297},{"type":"v1","addr":"192.168.123.103:6819","nonce":3463899297}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6820","nonce":3463899297},{"type":"v1","addr":"192.168.123.103:6821","nonce":3463899297}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6824","nonce":3463899297},{"type":"v1","addr":"192.168.123.103:6825","nonce":3463899297}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6822","nonce":3463899297},{"type":"v1","addr":"192.168.123.103:6823","nonce":3463899297}]},"public_addr":"192.168.123.103:6819/3463899297","cluster_addr":"192.168.123.103:6821/3463899297","heartbeat_back_addr":"192.168.123.103:6825/3463899297","heartbeat_front_addr":"192.168.123.103:6823/3463899297","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_sna
ps_scrub":"2026-03-06T12:35:16.314031+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T12:35:32.309469+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T12:35:48.507250+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.103:0/904056298":"2026-03-07T12:34:50.013742+0000","192.168.123.103:6801/1904667030":"2026-03-07T12:34:50.013742+0000","192.168.123.103:6800/1904667030":"2026-03-07T12:34:50.013742+0000","192.168.123.103:6801/1452692436":"2026-03-07T12:34:30.442185+0000","192.168.123.103:6800/1452692436":"2026-03-07T12:34:30.442185+0000","192.168.123.103:0/266500963":"2026-03-07T12:34:30.442185+0000","192.168.123.103:0/1296108930":"2026-03-07T12:34:50.013742+0000","192.168.123.103:0/1720102912":"2026-03-07T12:34:50.013742+0000","192.168.123.103:0/2926978165":"2026-03-07T12:34:30.442185+0000","192.168.123.103:0/1707621727":"2026-03-07T12:34:30.442185+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-06T13:35:53.852 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-06T12:35:52.065245+0000', 'flags': 32769, 'flags_names': 'hashpspool,creating', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 3, 'score_stable': 3, 'optimal_score': 1, 'raw_score_acting': 3, 
'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-06T13:35:53.852 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph osd pool get .mgr pg_num 2026-03-06T13:35:54.227 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:35:54.574 INFO:teuthology.orchestra.run.vm03.stdout:pg_num: 1 2026-03-06T13:35:54.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:54 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-06T13:35:54.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:54 vm03 ceph-mon[50411]: osdmap e20: 3 total, 3 up, 3 in 2026-03-06T13:35:54.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:54 vm03 ceph-mon[50411]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-06T13:35:54.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:54 vm03 ceph-mon[50411]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-06T13:35:54.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:54 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-06T13:35:54.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:54 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/2736430293' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-06T13:35:54.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:54 vm03 ceph-mon[50411]: pgmap v42: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-06T13:35:54.759 INFO:tasks.cephadm:Setting up client nodes... 2026-03-06T13:35:54.759 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-06T13:35:55.104 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:35:55.550 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:55 vm03 ceph-mon[50411]: mgrmap e14: a(active, since 64s) 2026-03-06T13:35:55.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:55 vm03 ceph-mon[50411]: osdmap e21: 3 total, 3 up, 3 in 2026-03-06T13:35:55.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:55 vm03 ceph-mon[50411]: from='client.? 
192.168.123.103:0/3297602330' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-06T13:35:55.551 INFO:teuthology.orchestra.run.vm03.stdout:[client.0] 2026-03-06T13:35:55.551 INFO:teuthology.orchestra.run.vm03.stdout: key = AQAryqppXZ+BHRAAL1nsAgSItcRTXRBCpfgQGA== 2026-03-06T13:35:55.709 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-06T13:35:55.709 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-06T13:35:55.709 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-06T13:35:55.744 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-06T13:35:55.744 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-06T13:35:55.744 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph mgr dump --format=json 2026-03-06T13:35:56.120 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:35:56.492 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:56 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/2435509381' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-06T13:35:56.492 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:56 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/2435509381' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-06T13:35:56.492 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:56 vm03 ceph-mon[50411]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail 2026-03-06T13:35:56.492 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-06T13:35:56.698 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":14,"flags":0,"active_gid":14156,"active_name":"a","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":3438818958},{"type":"v1","addr":"192.168.123.103:6801","nonce":3438818958}]},"active_addr":"192.168.123.103:6801/3438818958","active_change":"2026-03-06T12:34:50.014006+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = 
Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger 
collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"NVMe-oF container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. 
You will need to create this database and grant write privileges to the configured username, or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.103:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":876011779}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":4097104359}]},{"name":"rbd_support","addrvec":[{"type":"v2","ad
dr":"192.168.123.103:0","nonce":2342978994}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":1370471994}]}]} 2026-03-06T13:35:56.698 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-06T13:35:56.698 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-06T13:35:56.698 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph osd dump --format=json 2026-03-06T13:35:57.035 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:35:57.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:57 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/3466800814' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-06T13:35:57.419 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-06T13:35:57.419 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":21,"fsid":"b4d7b36a-1958-11f1-a2a1-8fd8798eb057","created":"2026-03-06T12:34:03.119494+0000","modified":"2026-03-06T12:35:54.154894+0000","last_up_change":"2026-03-06T12:35:51.044694+0000","last_in_change":"2026-03-06T12:35:37.167807+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-06T12:35:52.065245+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"314c4c77-2809-4001-a1fe-5031b74f6cd2","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":3036627600},{"type":"v1","addr":"192.168.123.103:6803","nonce":3036627600}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":3036627600},{"type":"v1","addr":"192.168.123.103:6805","nonce":3036627600}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":3036627600},{"type":"v1","addr":"192.168.123.103:6809","nonce":3036627600}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":3036627600},{"type":"v1","addr":"192.168.123.103:6807","nonce":3036627600}]},"public_addr":"192.168.123.103:6803/3036627600","cluster_addr":"192.168.123.103:6805/3036627600","heartbeat_back_addr":"192.168.123.103:6809/3036627600","heartbeat_front_addr":"192.168.123.103:6807/3036627600","state":["exists","up"]},{"osd":1,"uuid":"dcfd0b5e-f0e8-4d27-9ba3-77494068f199","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":19,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":3227865652},{"type":"v1","addr":"192.168.123.103:6811","nonce":3227865652}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":3227865652},{"type":"v1","addr":"192.168.123.103:6813","nonce":3227865652}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":3227865652},{"type":"v1","addr":"192.168.123.103:6817","nonce":3227865652}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":3227865652},{"type":"v1","addr":"192.168.123.103:6815","nonce":3227865652}]},"public_addr":"192.168.123.103:6811/3227865652","cluster_addr":"192.168.123.103:6813/3227865652","heartbeat_back_addr":"192.168.123.103:6817/3227865652","heartbeat_front_addr":"192.168.123.103:6815/3227865652","state":["exists","up"]},{"osd":2,"uuid":"236345e0-86d2-4671-a2c3-ba26e1d204fd","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6818","nonce":3463899297},{"type":"v1","addr":"192.168.123.103:6819","nonce":3463899297}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6820","nonce":3463899297},{"type":"v1","addr":"192.168.123.103:6821","nonce":3463899297}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6824","nonce":3463899297},{"type":"v1","addr":"192.168.123.103:6825","nonce":3463899297}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6822","nonce":3463899297},{"type":"v1","addr":"192.168.123.103:6823","nonce":3463899297}]},"public_addr":"192.168.123.103:6819/3463899297","cluster_addr":"192.168.123.103:6821/3463899297","heartbeat_back_addr":"192.168.123.103:6825/3463899297","heartbeat_front_addr":"192.168.123.103:6823/3463899297","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_sna
ps_scrub":"2026-03-06T12:35:16.314031+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T12:35:32.309469+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T12:35:48.507250+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.103:0/904056298":"2026-03-07T12:34:50.013742+0000","192.168.123.103:6801/1904667030":"2026-03-07T12:34:50.013742+0000","192.168.123.103:6800/1904667030":"2026-03-07T12:34:50.013742+0000","192.168.123.103:6801/1452692436":"2026-03-07T12:34:30.442185+0000","192.168.123.103:6800/1452692436":"2026-03-07T12:34:30.442185+0000","192.168.123.103:0/266500963":"2026-03-07T12:34:30.442185+0000","192.168.123.103:0/1296108930":"2026-03-07T12:34:50.013742+0000","192.168.123.103:0/1720102912":"2026-03-07T12:34:50.013742+0000","192.168.123.103:0/2926978165":"2026-03-07T12:34:30.442185+0000","192.168.123.103:0/1707621727":"2026-03-07T12:34:30.442185+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-06T13:35:57.670 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-06T13:35:57.670 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph osd dump --format=json 2026-03-06T13:35:58.005 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:35:58.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:58 vm03 ceph-mon[50411]: from='client.? 
192.168.123.103:0/1697855274' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-06T13:35:58.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:58 vm03 ceph-mon[50411]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-06T13:35:58.361 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-06T13:35:58.361 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":21,"fsid":"b4d7b36a-1958-11f1-a2a1-8fd8798eb057","created":"2026-03-06T12:34:03.119494+0000","modified":"2026-03-06T12:35:54.154894+0000","last_up_change":"2026-03-06T12:35:51.044694+0000","last_in_change":"2026-03-06T12:35:37.167807+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-06T12:35:52.065245+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"314c4c77-2809-4001-a1fe-5031b74f6cd2","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":3036627600},{"type":"v1","addr":"192.168.123.103:6803","nonce":3036627600}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":3036627600},{"type":"v1","addr":"192.168.123.103:6805","nonce":3036627600}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":3036627600},{"type":"v1","addr":"192.168.123.103:6809","nonce":3036627600}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":3036627600},{"type":"v1","addr":"192.168.123.103:6807","nonce":3036627600}]},"public_addr":"192.168.123.103:6803/3036627600","cluster_addr":"192.168.123.103:6805/3036627600","heartbeat_back_addr":"192.168.123.103:6809/3036627600","heartbeat_front_addr":"192.168.123.103:6807/3036627600","state":["exists","up"]},{"osd":1,"uuid":"dcfd0b5e-f0e8-4d27-9ba3-77494068f199","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":19,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":3227865652},{"type":"v1","addr":"192.168.123.103:6811","nonce":3227865652}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":3227865652},{"type":"v1","addr":"192.168.123.103:6813","nonce":3227865652}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":3227865652},{"type":"v1","addr":"192.168.123.103:6817","nonce":3227865652}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":3227865652},{"type":"v1","addr":"192.168.123.103:6815","nonce":3227865652}]},"public_addr":"192.168.123.103:6811/3227865652","cluster_addr":"192.168.123.103:6813/3227865652","heartbeat_back_addr":"192.168.123.103:6817/3227865652","heartbeat_front_addr":"192.168.123.103:6815/3227865652","state":["exists","up"]},{"osd":2,"uuid":"236345e0-86d2-4671-a2c3-ba26e1d204fd","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6818","nonce":3463899297},{"type":"v1","addr":"192.168.123.103:6819","nonce":3463899297}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6820","nonce":3463899297},{"type":"v1","addr":"192.168.123.103:6821","nonce":3463899297}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6824","nonce":3463899297},{"type":"v1","addr":"192.168.123.103:6825","nonce":3463899297}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6822","nonce":3463899297},{"type":"v1","addr":"192.168.123.103:6823","nonce":3463899297}]},"public_addr":"192.168.123.103:6819/3463899297","cluster_addr":"192.168.123.103:6821/3463899297","heartbeat_back_addr":"192.168.123.103:6825/3463899297","heartbeat_front_addr":"192.168.123.103:6823/3463899297","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_sna
ps_scrub":"2026-03-06T12:35:16.314031+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T12:35:32.309469+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T12:35:48.507250+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.103:0/904056298":"2026-03-07T12:34:50.013742+0000","192.168.123.103:6801/1904667030":"2026-03-07T12:34:50.013742+0000","192.168.123.103:6800/1904667030":"2026-03-07T12:34:50.013742+0000","192.168.123.103:6801/1452692436":"2026-03-07T12:34:30.442185+0000","192.168.123.103:6800/1452692436":"2026-03-07T12:34:30.442185+0000","192.168.123.103:0/266500963":"2026-03-07T12:34:30.442185+0000","192.168.123.103:0/1296108930":"2026-03-07T12:34:50.013742+0000","192.168.123.103:0/1720102912":"2026-03-07T12:34:50.013742+0000","192.168.123.103:0/2926978165":"2026-03-07T12:34:30.442185+0000","192.168.123.103:0/1707621727":"2026-03-07T12:34:30.442185+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-06T13:35:58.543 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph tell osd.0 flush_pg_stats 2026-03-06T13:35:58.543 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph tell osd.1 flush_pg_stats 2026-03-06T13:35:58.543 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph tell osd.2 flush_pg_stats 2026-03-06T13:35:59.026 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:35:59.057 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:35:59.281 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:35:59.328 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:35:59 vm03 ceph-mon[50411]: from='client.? 
192.168.123.103:0/3659012167' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-06T13:35:59.666 INFO:teuthology.orchestra.run.vm03.stdout:34359738377 2026-03-06T13:35:59.666 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph osd last-stat-seq osd.0 2026-03-06T13:35:59.676 INFO:teuthology.orchestra.run.vm03.stdout:77309411331 2026-03-06T13:35:59.676 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph osd last-stat-seq osd.2 2026-03-06T13:35:59.843 INFO:teuthology.orchestra.run.vm03.stdout:55834574855 2026-03-06T13:35:59.843 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph osd last-stat-seq osd.1 2026-03-06T13:36:00.072 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:36:00.266 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:36:00.335 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:00 vm03 ceph-mon[50411]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-06T13:36:00.555 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:36:00.561 INFO:teuthology.orchestra.run.vm03.stdout:34359738378 2026-03-06T13:36:00.770 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738377 got 34359738378 for osd.0 2026-03-06T13:36:00.770 DEBUG:teuthology.parallel:result is None 2026-03-06T13:36:00.803 INFO:teuthology.orchestra.run.vm03.stdout:77309411331 2026-03-06T13:36:00.962 INFO:teuthology.orchestra.run.vm03.stdout:55834574855 2026-03-06T13:36:00.967 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411331 got 77309411331 for osd.2 2026-03-06T13:36:00.967 DEBUG:teuthology.parallel:result is None 2026-03-06T13:36:01.135 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574855 got 55834574855 for osd.1 2026-03-06T13:36:01.135 DEBUG:teuthology.parallel:result is None 2026-03-06T13:36:01.135 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-06T13:36:01.135 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph pg dump --format=json 2026-03-06T13:36:01.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:01 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/2251423422' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-06T13:36:01.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:01 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/178429864' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-06T13:36:01.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:01 vm03 ceph-mon[50411]: from='client.? 
192.168.123.103:0/1106760487' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-06T13:36:01.473 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:36:01.825 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-06T13:36:01.825 INFO:teuthology.orchestra.run.vm03.stderr:dumped all 2026-03-06T13:36:02.008 INFO:teuthology.orchestra.run.vm03.stdout:{"pg_ready":true,"pg_map":{"version":46,"stamp":"2026-03-06T12:36:00.035137+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":82796,"kb_used_data":1884,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62819476,"statfs":{"total":64411926528,"available":64327143424,"internally_reserved":0,"allocated":1929216,"data_stored":1535472,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4770,"internal_metadata":82373982},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data
_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"4.000537"},"pg_stats":[{"pgid":"1.0","version":"20'32","reported_seq":57,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-06T12:35:54.161185+0000","last_change":"2026-03-06T12:35:53.224428+0000","last_active":"2026-03-06T12:35:54.161185+0000","last_peered":"2026-03-06T12:35:54.161185+0000","last_clean":"2026-03-06T12:35:54.161185+0000","last_became_active":"2026-03-06T12:35:53.224306+0000","last_became_peered":"2026-03-06T12:35:53.224306+0000","last_unstale":"2026-03-06T12:35:54.161185+0000","last_undegraded":"2026-03-06T12:35:54.161185+0000","last_fullsized":"2026-03-06T12:35:54.161185+0000","mapping_epoch":19,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":20,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-06T12:35:52.142541+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-06T12:35:52.142541+0000","last_clean_scrub_stamp":"2026-03-06T12:35:52.142541+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-07T22:55:40.501297+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,2],"acting":[1,0,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"n
um_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":18,"seq":77309411331,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27604,"kb_used_data":628,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939820,"statfs":{"total":21470642176,"available":21442375680,"internally_reserved":0,"allocated":643072,"data_stored":511824,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574855,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27596,"kb_used_data":628,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939828,"statfs":{"total":21470642176,"available":21442383872,"internally_reserved":0,"allocated":643072,"data_stored":511824,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738378,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27596,"kb_used_data":628,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939828,"statfs":{"total":21470642176,"available":21442383872,"internally_reserved":0,"allocated":643072,"data_stored":511824,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-06T13:36:02.009 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph pg dump --format=json 2026-03-06T13:36:02.356 INFO:teuthology.orchestra.run.vm03.stderr:Inferring 
config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:36:02.437 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:02 vm03 ceph-mon[50411]: from='client.14250 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T13:36:02.437 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:02 vm03 ceph-mon[50411]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-06T13:36:02.700 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-06T13:36:02.701 INFO:teuthology.orchestra.run.vm03.stderr:dumped all 2026-03-06T13:36:02.884 INFO:teuthology.orchestra.run.vm03.stdout:{"pg_ready":true,"pg_map":{"version":47,"stamp":"2026-03-06T12:36:02.035449+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":82796,"kb_used_data":1884,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62819476,"statfs":{"total":64411926528,"available":64327143424,"internally_reserved":0,"allocated":1929216,"data_stored":1535472,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4770,"internal_metadata":82373982},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_l
arge_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"6.000849"},"pg_stats":[{"pgid":"1.0","version":"20'32","reported_seq":57,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-06T12:35:54.161185+0000","last_change":"2026-03-06T12:35:53.224428+0000","last_active":"2026-03-06T12:35:54.161185+0000","last_peered":"2026-03-06T12:35:54.161185+0000","last_clean":"2026-03-06T12:35:54.161185+0000","last_became_active":"2026-03-06T12:35:53.224306+0000","last_became_peered":"2026-03-06T12:35:53.224306+0000","last_unstale":"2026-03-06T12:35:54.161185+0000","last_undegraded":"2026-03-06T12:35:54.161185+0000","last_fullsized":"2026-03-06T12:35:54.161185+0000","mapping_epoch":19,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":20,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-06T12:35:52.142541+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-06T12:35:52.142541+0000","last_clean_scrub_stamp":"2026-03-06T12:35:52.142541+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-07T22:55:40.501297+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,2],"acting":[1,0,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_f
lush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":18,"seq":77309411331,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27604,"kb_used_data":628,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939820,"statfs":{"total":21470642176,"available":21442375680,"internally_reserved":0,"allocated":643072,"data_stored":511824,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574856,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27596,"kb_used_data":628,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939828,"statfs":{"total":21470642176,"available":21442383872,"internally_reserved":0,"allocated":643072,"data_stored":511824,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738378,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27596,"kb_used_data":628,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939828,"statfs":{"total":21470642176,"available":21442383872,"internally_reserved":0,"allocated":643072,"data_stored":511824,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-06T13:36:02.884 INFO:tasks.cephadm.ceph_manager.ceph:clean! 
2026-03-06T13:36:02.884 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-06T13:36:02.884 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-06T13:36:02.884 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph health --format=json 2026-03-06T13:36:03.232 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:36:03.602 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-06T13:36:03.603 INFO:teuthology.orchestra.run.vm03.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-06T13:36:03.603 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:03 vm03 ceph-mon[50411]: from='client.14252 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T13:36:03.785 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-06T13:36:03.785 INFO:tasks.cephadm:Setup complete, yielding 2026-03-06T13:36:03.785 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-06T13:36:03.787 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm03.local 2026-03-06T13:36:03.787 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- bash -c 'ceph osd pool create foo' 2026-03-06T13:36:04.119 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:36:04.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:04 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/2658037786' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-06T13:36:04.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:04 vm03 ceph-mon[50411]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-06T13:36:05.189 INFO:teuthology.orchestra.run.vm03.stderr:pool 'foo' created 2026-03-06T13:36:05.344 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- bash -c 'rbd pool init foo' 2026-03-06T13:36:05.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:05 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/3086497849' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-06T13:36:05.659 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config 2026-03-06T13:36:06.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:06 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/3086497849' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished 2026-03-06T13:36:06.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:06 vm03 ceph-mon[50411]: osdmap e22: 3 total, 3 up, 3 in 2026-03-06T13:36:06.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:06 vm03 ceph-mon[50411]: from='client.? 
192.168.123.103:0/418786163' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch
2026-03-06T13:36:06.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:06 vm03 ceph-mon[50411]: pgmap v50: 33 pgs: 11 creating+peering, 1 active+clean, 21 unknown; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-06T13:36:07.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:07 vm03 ceph-mon[50411]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-06T13:36:07.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:07 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/418786163' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished
2026-03-06T13:36:07.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:07 vm03 ceph-mon[50411]: osdmap e23: 3 total, 3 up, 3 in
2026-03-06T13:36:08.364 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- bash -c 'ceph orch apply iscsi foo u p'
2026-03-06T13:36:08.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:08 vm03 ceph-mon[50411]: osdmap e24: 3 total, 3 up, 3 in
2026-03-06T13:36:08.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:08 vm03 ceph-mon[50411]: pgmap v53: 33 pgs: 11 creating+peering, 10 active+clean, 12 unknown; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-06T13:36:08.684 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/mon.a/config
2026-03-06T13:36:09.031 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled iscsi.foo update...
2026-03-06T13:36:09.197 INFO:teuthology.run_tasks:Running task workunit...
2026-03-06T13:36:09.200 INFO:tasks.workunit:Pulling workunits from ref 5726a36c3452e5b72190cfceba828abc62c819b7
2026-03-06T13:36:09.200 INFO:tasks.workunit:Making a separate scratch dir for every client...
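The wait_until_healthy step earlier in this stretch simply polls `ceph health` through a cephadm shell until the JSON status reads HEALTH_OK. A by-hand equivalent of that loop, as a sketch (same image and fsid as the logged command; the helper's actual retry interval and timeout are not shown in this log):

    until sudo /home/ubuntu/cephtest/cephadm \
            --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3 \
            shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 -- ceph health --format=json |
          grep -q '"status":"HEALTH_OK"'; do
      sleep 5   # assumed cadence, not the helper's actual value
    done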
2026-03-06T13:36:09.200 DEBUG:teuthology.orchestra.run.vm03:> stat -- /home/ubuntu/cephtest/mnt.0
2026-03-06T13:36:09.221 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-06T13:36:09.221 INFO:teuthology.orchestra.run.vm03.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory
2026-03-06T13:36:09.221 DEBUG:teuthology.orchestra.run.vm03:> mkdir -- /home/ubuntu/cephtest/mnt.0
2026-03-06T13:36:09.279 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0
2026-03-06T13:36:09.279 DEBUG:teuthology.orchestra.run.vm03:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0
2026-03-06T13:36:09.339 INFO:tasks.workunit:timeout=3h
2026-03-06T13:36:09.339 INFO:tasks.workunit:cleanup=True
2026-03-06T13:36:09.339 DEBUG:teuthology.orchestra.run.vm03:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 5726a36c3452e5b72190cfceba828abc62c819b7
2026-03-06T13:36:09.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:09 vm03 ceph-mon[50411]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-06T13:36:09.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:09 vm03 ceph-mon[50411]: Cluster is now healthy
2026-03-06T13:36:09.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:09 vm03 ceph-mon[50411]: osdmap e25: 3 total, 3 up, 3 in
2026-03-06T13:36:09.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:09 vm03 ceph-mon[50411]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "foo", "api_user": "u", "api_password": "p", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T13:36:09.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:09 vm03 ceph-mon[50411]: Saving service iscsi.foo spec with placement count:1
2026-03-06T13:36:09.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:09 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a'
2026-03-06T13:36:09.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:09 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T13:36:09.391 INFO:tasks.workunit.client.0.vm03.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'...
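`ceph orch apply iscsi foo u p` (run above) is positional shorthand; the mon log shows the parsed form: pool "foo", api_user "u", api_password "p", and a default placement of one gateway ("Saving service iscsi.foo spec with placement count:1"). A declarative equivalent, sketched under the assumption that the standard iSCSI service-spec field names apply (`cephadm shell --mount` places the host path under /mnt inside the container):

    mkdir -p /tmp/specs
    cat > /tmp/specs/iscsi-foo.yaml <<'EOF'
    service_type: iscsi
    service_id: foo
    placement:
      count: 1
    spec:
      pool: foo
      api_user: u
      api_password: p
    EOF
    sudo /home/ubuntu/cephtest/cephadm shell --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 \
        --mount /tmp/specs -- ceph orch apply -i /mnt/iscsi-foo.yaml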
2026-03-06T13:36:10.473 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:10 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T13:36:10.473 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:10 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T13:36:10.473 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:10 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:36:10.473 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:10 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm03.ncatkq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-06T13:36:10.473 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:10 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm03.ncatkq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-06T13:36:10.473 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:10 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T13:36:10.473 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:10 vm03 ceph-mon[50411]: Deploying daemon iscsi.foo.vm03.ncatkq on vm03 2026-03-06T13:36:10.473 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:10 vm03 ceph-mon[50411]: pgmap v55: 33 pgs: 11 creating+peering, 22 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-06T13:36:11.818 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:11 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:36:11.819 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:11 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:36:11.819 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:11 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:36:11.819 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:11 vm03 ceph-mon[50411]: Checking pool "foo" exists for service iscsi.foo 2026-03-06T13:36:11.819 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:11 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:36:11.819 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:11 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T13:36:12.595 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='client.? 
192.168.123.103:0/1529945451' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-06T13:36:12.595 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: pgmap v56: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-06T13:36:12.595 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/526996390' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/904056298"}]: dispatch 2026-03-06T13:36:12.595 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:36:12.595 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:36:12.595 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T13:36:12.595 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T13:36:12.595 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:36:12.595 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-06T13:36:12.595 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-06T13:36:12.595 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:36:12.595 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm03"}]: dispatch 2026-03-06T13:36:12.595 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:36:12.596 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T13:36:12.596 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T13:36:12.596 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T13:36:12.596 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:12 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:36:13.859 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:13 vm03 ceph-mon[50411]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-06T13:36:13.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:13 vm03 ceph-mon[50411]: Adding iSCSI gateway http://:@192.168.123.103:5000 to Dashboard 2026-03-06T13:36:13.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:13 vm03 ceph-mon[50411]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-06T13:36:13.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:13 vm03 ceph-mon[50411]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm03"}]: dispatch 2026-03-06T13:36:13.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:13 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/526996390' entity='client.iscsi.foo.vm03.ncatkq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/904056298"}]': finished 2026-03-06T13:36:13.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:13 vm03 ceph-mon[50411]: osdmap e26: 3 total, 3 up, 3 in 2026-03-06T13:36:13.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:13 vm03 ceph-mon[50411]: mgrmap e15: a(active, since 82s) 2026-03-06T13:36:13.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:13 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/126859895' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/1904667030"}]: dispatch 2026-03-06T13:36:14.862 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:14 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/126859895' entity='client.iscsi.foo.vm03.ncatkq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/1904667030"}]': finished 2026-03-06T13:36:14.862 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:14 vm03 ceph-mon[50411]: osdmap e27: 3 total, 3 up, 3 in 2026-03-06T13:36:14.862 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:14 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/2640944590' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/1904667030"}]: dispatch 2026-03-06T13:36:14.862 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:14 vm03 ceph-mon[50411]: pgmap v59: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 511 B/s wr, 1 op/s 2026-03-06T13:36:15.859 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:15 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/2640944590' entity='client.iscsi.foo.vm03.ncatkq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/1904667030"}]': finished 2026-03-06T13:36:15.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:15 vm03 ceph-mon[50411]: osdmap e28: 3 total, 3 up, 3 in 2026-03-06T13:36:15.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:15 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/750357362' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/1452692436"}]: dispatch 2026-03-06T13:36:15.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:15 vm03 ceph-mon[50411]: from='client.? 
192.168.123.103:0/750357362' entity='client.iscsi.foo.vm03.ncatkq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/1452692436"}]': finished 2026-03-06T13:36:15.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:15 vm03 ceph-mon[50411]: osdmap e29: 3 total, 3 up, 3 in 2026-03-06T13:36:15.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:15 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:36:15.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:15 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/930295740' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/1452692436"}]: dispatch 2026-03-06T13:36:17.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:17 vm03 ceph-mon[50411]: pgmap v62: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 767 B/s wr, 2 op/s 2026-03-06T13:36:17.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:17 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/930295740' entity='client.iscsi.foo.vm03.ncatkq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/1452692436"}]': finished 2026-03-06T13:36:17.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:17 vm03 ceph-mon[50411]: osdmap e30: 3 total, 3 up, 3 in 2026-03-06T13:36:17.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:17 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/2818653234' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/266500963"}]: dispatch 2026-03-06T13:36:18.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:18 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/2818653234' entity='client.iscsi.foo.vm03.ncatkq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/266500963"}]': finished 2026-03-06T13:36:18.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:18 vm03 ceph-mon[50411]: osdmap e31: 3 total, 3 up, 3 in 2026-03-06T13:36:18.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:18 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/1419479956' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/1296108930"}]: dispatch 2026-03-06T13:36:19.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:19 vm03 ceph-mon[50411]: pgmap v65: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-06T13:36:19.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:19 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/1419479956' entity='client.iscsi.foo.vm03.ncatkq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/1296108930"}]': finished 2026-03-06T13:36:19.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:19 vm03 ceph-mon[50411]: osdmap e32: 3 total, 3 up, 3 in 2026-03-06T13:36:19.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:19 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/3721667379' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/1720102912"}]: dispatch 2026-03-06T13:36:20.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:20 vm03 ceph-mon[50411]: from='client.? 
192.168.123.103:0/3721667379' entity='client.iscsi.foo.vm03.ncatkq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/1720102912"}]': finished 2026-03-06T13:36:20.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:20 vm03 ceph-mon[50411]: osdmap e33: 3 total, 3 up, 3 in 2026-03-06T13:36:20.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:20 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/3592322141' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2926978165"}]: dispatch 2026-03-06T13:36:20.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:20 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' 2026-03-06T13:36:21.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:21 vm03 ceph-mon[50411]: pgmap v68: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-06T13:36:21.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:21 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/3592322141' entity='client.iscsi.foo.vm03.ncatkq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2926978165"}]': finished 2026-03-06T13:36:21.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:21 vm03 ceph-mon[50411]: osdmap e34: 3 total, 3 up, 3 in 2026-03-06T13:36:21.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:21 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/2732017362' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/1707621727"}]: dispatch 2026-03-06T13:36:22.859 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:22 vm03 ceph-mon[50411]: from='client.14266 -' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-06T13:36:22.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:22 vm03 ceph-mon[50411]: from='client.? 
192.168.123.103:0/2732017362' entity='client.iscsi.foo.vm03.ncatkq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/1707621727"}]': finished 2026-03-06T13:36:22.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:22 vm03 ceph-mon[50411]: osdmap e35: 3 total, 3 up, 3 in 2026-03-06T13:36:22.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:22 vm03 ceph-mon[50411]: pgmap v71: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-06T13:36:25.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:25 vm03 ceph-mon[50411]: pgmap v72: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 861 B/s rd, 0 op/s 2026-03-06T13:36:27.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:27 vm03 ceph-mon[50411]: pgmap v73: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 737 B/s rd, 0 op/s 2026-03-06T13:36:29.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:29 vm03 ceph-mon[50411]: pgmap v74: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-06T13:36:31.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:31 vm03 ceph-mon[50411]: pgmap v75: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-06T13:36:33.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:33 vm03 ceph-mon[50411]: from='client.14266 -' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-06T13:36:33.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:33 vm03 ceph-mon[50411]: pgmap v76: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 980 B/s rd, 0 op/s 2026-03-06T13:36:35.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:35 vm03 ceph-mon[50411]: pgmap v77: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-06T13:36:37.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:37 vm03 ceph-mon[50411]: pgmap v78: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-06T13:36:39.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:39 vm03 ceph-mon[50411]: pgmap v79: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-06T13:36:41.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:41 vm03 ceph-mon[50411]: pgmap v80: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-06T13:36:42.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:42 vm03 ceph-mon[50411]: from='client.14266 -' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-06T13:36:42.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:42 vm03 ceph-mon[50411]: pgmap v81: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-06T13:36:45.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:45 vm03 ceph-mon[50411]: pgmap v82: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-06T13:36:47.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:47 vm03 ceph-mon[50411]: pgmap v83: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-06T13:36:49.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:49 vm03 ceph-mon[50411]: pgmap v84: 33 pgs: 33 
active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-06T13:36:51.566 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:51 vm03 ceph-mon[50411]: pgmap v85: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:Note: switching to '5726a36c3452e5b72190cfceba828abc62c819b7'.
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:You are in 'detached HEAD' state. You can look around, make experimental
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:changes and commit them, and you can discard any commits you make in this
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:state without impacting any branches by switching back to a branch.
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:If you want to create a new branch to retain commits you create, you may
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:do so (now or later) by using -c with the switch command. Example:
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr: git switch -c <new-branch-name>
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:Or undo this operation with:
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr: git switch -
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:Turn off this advice by setting config variable advice.detachedHead to false
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:
2026-03-06T13:36:52.314 INFO:tasks.workunit.client.0.vm03.stderr:HEAD is now at 5726a36c345 qa/suites/orch/cephadm/osds: drop nvme_loop task
2026-03-06T13:36:52.320 DEBUG:teuthology.orchestra.run.vm03:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0
2026-03-06T13:36:52.376 INFO:tasks.workunit.client.0.vm03.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done
2026-03-06T13:36:52.378 INFO:tasks.workunit.client.0.vm03.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io'
2026-03-06T13:36:52.378 INFO:tasks.workunit.client.0.vm03.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test
2026-03-06T13:36:52.421 INFO:tasks.workunit.client.0.vm03.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io
2026-03-06T13:36:52.458 INFO:tasks.workunit.client.0.vm03.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read
2026-03-06T13:36:52.489 INFO:tasks.workunit.client.0.vm03.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io'
2026-03-06T13:36:52.490 INFO:tasks.workunit.client.0.vm03.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs'
2026-03-06T13:36:52.490 INFO:tasks.workunit.client.0.vm03.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc
2026-03-06T13:36:52.519 INFO:tasks.workunit.client.0.vm03.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs'
2026-03-06T13:36:52.523 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-06T13:36:52.523 DEBUG:teuthology.orchestra.run.vm03:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout
2026-03-06T13:36:52.581 INFO:tasks.workunit:Running workunits matching cephadm/test_iscsi_pids_limit.sh on client.0...
2026-03-06T13:36:52.581 INFO:tasks.workunit:Running workunit cephadm/test_iscsi_pids_limit.sh...
2026-03-06T13:36:52.582 DEBUG:teuthology.orchestra.run.vm03:workunit test cephadm/test_iscsi_pids_limit.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5726a36c3452e5b72190cfceba828abc62c819b7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh
2026-03-06T13:36:52.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:52 vm03 ceph-mon[50411]: from='client.14266 -' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-06T13:36:52.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:52 vm03 ceph-mon[50411]: pgmap v86: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-06T13:36:52.634 INFO:tasks.workunit.client.0.vm03.stderr:++ sudo podman ps -qa --filter=name=iscsi
2026-03-06T13:36:52.672 INFO:tasks.workunit.client.0.vm03.stderr:+ ISCSI_CONT_IDS='94685991a50d
2026-03-06T13:36:52.672 INFO:tasks.workunit.client.0.vm03.stderr:87cfee2c3e90'
2026-03-06T13:36:52.672 INFO:tasks.workunit.client.0.vm03.stderr:++ echo 94685991a50d 87cfee2c3e90
2026-03-06T13:36:52.672 INFO:tasks.workunit.client.0.vm03.stderr:++ wc -w
2026-03-06T13:36:52.674 INFO:tasks.workunit.client.0.vm03.stderr:+ CONT_COUNT=2
2026-03-06T13:36:52.674 INFO:tasks.workunit.client.0.vm03.stderr:+ test 2 -eq 2
2026-03-06T13:36:52.674 INFO:tasks.workunit.client.0.vm03.stderr:+ for i in ${ISCSI_CONT_IDS}
2026-03-06T13:36:52.674 INFO:tasks.workunit.client.0.vm03.stderr:++ sudo podman exec 94685991a50d cat /sys/fs/cgroup/pids/pids.max
2026-03-06T13:36:52.725 INFO:tasks.workunit.client.0.vm03.stderr:cat: /sys/fs/cgroup/pids/pids.max: No such file or directory
2026-03-06T13:36:52.777 INFO:tasks.workunit.client.0.vm03.stderr:+ '[' ']'
2026-03-06T13:36:52.777 INFO:tasks.workunit.client.0.vm03.stderr:++ sudo podman exec 94685991a50d cat /sys/fs/cgroup/pids.max
2026-03-06T13:36:52.875 INFO:tasks.workunit.client.0.vm03.stderr:+ '[' max ']'
2026-03-06T13:36:52.875 INFO:tasks.workunit.client.0.vm03.stderr:++ sudo podman exec 94685991a50d cat /sys/fs/cgroup/pids.max
2026-03-06T13:36:52.986 INFO:tasks.workunit.client.0.vm03.stderr:+ pid_limit=max
2026-03-06T13:36:52.986 INFO:tasks.workunit.client.0.vm03.stderr:+ test max == max
2026-03-06T13:36:52.986 INFO:tasks.workunit.client.0.vm03.stderr:+ for i in ${ISCSI_CONT_IDS}
2026-03-06T13:36:52.986 INFO:tasks.workunit.client.0.vm03.stderr:++ sudo podman exec 87cfee2c3e90 cat /sys/fs/cgroup/pids/pids.max
2026-03-06T13:36:53.041 INFO:tasks.workunit.client.0.vm03.stderr:cat: /sys/fs/cgroup/pids/pids.max: No such file or directory
2026-03-06T13:36:53.097 INFO:tasks.workunit.client.0.vm03.stderr:+ '[' ']'
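The xtrace above is test_iscsi_pids_limit.sh probing each iSCSI container's pid limit: it tries the cgroup v1 path (/sys/fs/cgroup/pids/pids.max) first, and when that fails — as here, on a cgroup-v2 host — falls back to the unified-hierarchy path /sys/fs/cgroup/pids.max, expecting the literal value `max`, i.e. no pid limit. A condensed sketch of that probe (cid stands in for a container id from `podman ps -qa --filter=name=iscsi`):

    cid=94685991a50d   # first iSCSI container in the trace above
    pid_limit=$(sudo podman exec "$cid" cat /sys/fs/cgroup/pids/pids.max 2>/dev/null)    # cgroup v1 layout
    [ -n "$pid_limit" ] || pid_limit=$(sudo podman exec "$cid" cat /sys/fs/cgroup/pids.max)  # cgroup v2 layout
    test "$pid_limit" == max   # "max" means the pids controller imposes no limit

The trace that follows then exercises the absent limit by forking ~20000 background `sleep`s inside each container; on this small VPS that is plausibly the memory pressure behind the OOM kills further down.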
2026-03-06T13:36:53.098 INFO:tasks.workunit.client.0.vm03.stderr:++ sudo podman exec 87cfee2c3e90 cat /sys/fs/cgroup/pids.max 2026-03-06T13:36:53.201 INFO:tasks.workunit.client.0.vm03.stderr:+ '[' max ']' 2026-03-06T13:36:53.202 INFO:tasks.workunit.client.0.vm03.stderr:++ sudo podman exec 87cfee2c3e90 cat /sys/fs/cgroup/pids.max 2026-03-06T13:36:53.301 INFO:tasks.workunit.client.0.vm03.stderr:+ pid_limit=max 2026-03-06T13:36:53.301 INFO:tasks.workunit.client.0.vm03.stderr:+ test max == max 2026-03-06T13:36:53.301 INFO:tasks.workunit.client.0.vm03.stderr:+ for i in ${ISCSI_CONT_IDS} 2026-03-06T13:36:53.301 INFO:tasks.workunit.client.0.vm03.stderr:+ sudo podman exec 94685991a50d /bin/sh -c 'for j in {0..20000}; do sleep 300 & done' 2026-03-06T13:36:55.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:55 vm03 ceph-mon[50411]: pgmap v87: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-06T13:36:57.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:57 vm03 ceph-mon[50411]: pgmap v88: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-06T13:36:59.359 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:36:59 vm03 ceph-mon[50411]: pgmap v89: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-06T13:37:01.578 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:37:01 vm03 ceph-mon[50411]: pgmap v90: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-06T13:37:03.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:37:03 vm03 ceph-mon[50411]: from='client.14266 -' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-06T13:37:03.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:37:03 vm03 ceph-mon[50411]: pgmap v91: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-06T13:37:04.609 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:37:04 vm03 ceph-mon[50411]: pgmap v92: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-06T13:37:04.698 INFO:tasks.workunit.client.0.vm03.stderr:+ for i in ${ISCSI_CONT_IDS} 2026-03-06T13:37:04.698 INFO:tasks.workunit.client.0.vm03.stderr:+ sudo podman exec 87cfee2c3e90 /bin/sh -c 'for j in {0..20000}; do sleep 300 & done' 2026-03-06T13:37:06.790 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:37:06 vm03 ceph-mon[50411]: pgmap v93: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s 2026-03-06T13:37:11.996 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:37:09 vm03 ceph-mon[50411]: pgmap v94: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-06T13:37:47.928 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:37:45 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0[60965]: 2026-03-06T12:37:45.865+0000 7fbe910a4640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.103:6814 osd.1 since back 2026-03-06T12:37:15.378598+0000 front 2026-03-06T12:37:17.679678+0000 (oldest deadline 2026-03-06T12:37:31.267827+0000) 2026-03-06T13:37:47.928 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:37:45 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0[60965]: 2026-03-06T12:37:45.865+0000 7fbe910a4640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.103:6822 osd.2 since back 2026-03-06T12:37:09.583909+0000 front 
2026-03-06T12:37:15.054881+0000 (oldest deadline 2026-03-06T12:37:31.267827+0000) 2026-03-06T13:37:48.124 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:37:47 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1[67040]: 2026-03-06T12:37:47.661+0000 7efe53f8b640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.103:6806 osd.0 since back 2026-03-06T12:37:20.217402+0000 front 2026-03-06T12:37:43.656143+0000 (oldest deadline 2026-03-06T12:37:35.566008+0000) 2026-03-06T13:37:49.386 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:37:48 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1[67040]: 2026-03-06T12:37:48.881+0000 7efe53f8b640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.103:6806 osd.0 since back 2026-03-06T12:37:20.217402+0000 front 2026-03-06T12:37:43.656143+0000 (oldest deadline 2026-03-06T12:37:35.566008+0000) 2026-03-06T13:37:51.895 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:37:51 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2[73177]: 2026-03-06T12:37:50.997+0000 7f174ffd1640 -1 osd.2 35 heartbeat_check: no reply from 192.168.123.103:6806 osd.0 since back 2026-03-06T12:37:43.367200+0000 front 2026-03-06T12:37:43.638663+0000 (oldest deadline 2026-03-06T12:37:49.292952+0000) 2026-03-06T13:37:54.145 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:37:53 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0[60965]: 2026-03-06T12:37:53.734+0000 7fbe910a4640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.103:6814 osd.1 since back 2026-03-06T12:37:15.378598+0000 front 2026-03-06T12:37:53.385409+0000 (oldest deadline 2026-03-06T12:37:31.267827+0000) 2026-03-06T13:37:55.395 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:37:54 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0[60965]: 2026-03-06T12:37:54.868+0000 7fbe910a4640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.103:6814 osd.1 since back 2026-03-06T12:37:15.378598+0000 front 2026-03-06T12:37:53.885871+0000 (oldest deadline 2026-03-06T12:37:31.267827+0000) 2026-03-06T13:37:55.395 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:37:55 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2[73177]: 2026-03-06T12:37:55.010+0000 7f174ffd1640 -1 osd.2 35 heartbeat_check: no reply from 192.168.123.103:6806 osd.0 since back 2026-03-06T12:37:54.483177+0000 front 2026-03-06T12:37:43.638663+0000 (oldest deadline 2026-03-06T12:37:49.292952+0000) 2026-03-06T13:37:56.159 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:37:55 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1[67040]: 2026-03-06T12:37:55.592+0000 7efe53f8b640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.103:6806 osd.0 since back 2026-03-06T12:37:20.217402+0000 front 2026-03-06T12:37:43.656143+0000 (oldest deadline 2026-03-06T12:37:35.566008+0000) 2026-03-06T13:38:00.023 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:37:59 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2[73177]: 2026-03-06T12:37:58.955+0000 7f174ffd1640 -1 osd.2 35 heartbeat_check: no reply from 192.168.123.103:6806 osd.0 since back 2026-03-06T12:37:55.657790+0000 front 2026-03-06T12:37:43.638663+0000 (oldest deadline 2026-03-06T12:37:49.292952+0000) 2026-03-06T13:38:03.398 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:02 vm03 ceph-mon[50411]: pgmap v95: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 819 B/s rd, 0 op/s 2026-03-06T13:38:03.398 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:02 vm03 ceph-mon[50411]: from='mgr.14156 192.168.123.103:0/271048196' entity='mgr.a' cmd=[{"prefix": "config dump", "format": 
"json"}]: dispatch 2026-03-06T13:38:05.498 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:38:05 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0[60965]: 2026-03-06T12:38:05.150+0000 7fbe910a4640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.103:6814 osd.1 since back 2026-03-06T12:37:56.619955+0000 front 2026-03-06T12:38:01.805738+0000 (oldest deadline 2026-03-06T12:37:59.828343+0000) 2026-03-06T13:38:05.498 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:38:05 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2[73177]: 2026-03-06T12:38:04.911+0000 7f174ffd1640 -1 osd.2 35 heartbeat_check: no reply from 192.168.123.103:6806 osd.0 since back 2026-03-06T12:38:04.869573+0000 front 2026-03-06T12:37:43.638663+0000 (oldest deadline 2026-03-06T12:37:49.292952+0000) 2026-03-06T13:38:12.616 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:38:10 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a.service: A process of this unit has been killed by the OOM killer. 2026-03-06T13:38:13.003 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:12 vm03 ceph-mon[50411]: pgmap v96: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 666 B/s rd, 0 op/s 2026-03-06T13:38:13.003 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:12 vm03 ceph-mon[50411]: from='client.14266 -' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-06T13:38:13.003 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:12 vm03 ceph-mon[50411]: pgmap v97: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 377 B/s rd, 0 op/s 2026-03-06T13:38:13.003 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:12 vm03 ceph-mon[50411]: pgmap v98: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 146 B/s rd, 0 op/s 2026-03-06T13:38:13.003 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:12 vm03 ceph-mon[50411]: pgmap v99: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 145 B/s rd, 0 op/s 2026-03-06T13:38:13.003 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:12 vm03 ceph-mon[50411]: from='client.14266 -' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-06T13:38:13.003 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:12 vm03 ceph-mon[50411]: pgmap v100: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 19 B/s rd, 0 op/s 2026-03-06T13:38:13.003 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:12 vm03 ceph-mon[50411]: osd.1 reported failed by osd.0 2026-03-06T13:38:13.003 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:12 vm03 ceph-mon[50411]: osd.0 reported failed by osd.1 2026-03-06T13:38:13.003 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:12 vm03 ceph-mon[50411]: osd.2 reported failed by osd.0 2026-03-06T13:38:13.003 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:12 vm03 ceph-mon[50411]: osd.0 failure report canceled by osd.1 2026-03-06T13:38:13.003 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:12 vm03 ceph-mon[50411]: osd.2 failure report canceled by osd.0 2026-03-06T13:38:13.003 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:12 vm03 ceph-mon[50411]: osd.1 failure report canceled by osd.0 2026-03-06T13:38:13.003 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:12 vm03 ceph-mon[50411]: osd.1 reported failed by osd.0 2026-03-06T13:38:13.003 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:38:12 vm03 ceph-mon[50411]: osd.1 failure report canceled 
by osd.0
2026-03-06T13:38:13.522 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:38:13 vm03 podman[104381]: 2026-03-06 13:38:13.266328776 +0100 CET m=+0.156102707 container died 1a2ab987f0731409ecae337fa89257e18b0fb184d162c250a9cc92e591c7ea3f (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:38:13.523 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:38:13 vm03 podman[104381]: 2026-03-06 13:38:13.505902513 +0100 CET m=+0.395676444 container remove 1a2ab987f0731409ecae337fa89257e18b0fb184d162c250a9cc92e591c7ea3f (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git)
2026-03-06T13:38:13.859 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:38:13 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a.service: Main process exited, code=exited, status=137/n/a
2026-03-06T13:38:13.859 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:38:13 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a.service: Failed with result 'exit-code'.
2026-03-06T13:38:13.859 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:38:13 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a.service: Consumed 44.405s CPU time.
2026-03-06T13:38:25.085 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:38:23 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a.service: Scheduled restart job, restart counter is at 1.
2026-03-06T13:38:25.086 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:38:23 vm03 systemd[1]: Stopped Ceph mgr.a for b4d7b36a-1958-11f1-a2a1-8fd8798eb057.
2026-03-06T13:38:25.086 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:38:23 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a.service: Consumed 44.405s CPU time.
2026-03-06T13:38:25.086 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:38:23 vm03 systemd[1]: Starting Ceph mgr.a for b4d7b36a-1958-11f1-a2a1-8fd8798eb057...
2026-03-06T13:39:10.615 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:39:09 vm03 ceph-mon[50411]: Manager daemon a is unresponsive. No standby daemons available.
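status=137 is 128+9, i.e. the mgr's main process died to SIGKILL, consistent with the unit's earlier "killed by the OOM killer" message; systemd then schedules a restart (counter at 1) and the mon flags the mgr as unresponsive with no standby available. A sketch of confirming the OOM kill from the host with standard systemd tooling:

    unit=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a.service
    systemctl show "$unit" -p Result -p ExecMainStatus -p NRestarts
    # ExecMainStatus=137 -> 128 + SIGKILL(9)
    journalctl -k --since "13:37" | grep -iE 'oom|killed process'   # kernel-side OOM record

The same pattern repeats below for osd.2 and osd.0; the mon then marks osd.2 failed once two distinct OSD reporters have gone unanswered past the heartbeat grace ("2 reporters from different osd after 46.050847 >= grace 20.000000"). The thresholds involved can be read back with `ceph config get osd osd_heartbeat_grace` and `ceph config get mon mon_osd_min_down_reporters`.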
2026-03-06T13:39:15.628 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:39:10 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2[73177]: 2026-03-06T12:39:03.879+0000 7f174ffd1640 -1 osd.2 35 heartbeat_check: no reply from 192.168.123.103:6814 osd.1 since back 2026-03-06T12:38:28.687810+0000 front 2026-03-06T12:38:47.676399+0000 (oldest deadline 2026-03-06T12:38:52.211779+0000)
2026-03-06T13:39:15.917 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:39:12 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0[60965]: 2026-03-06T12:39:10.937+0000 7fbe910a4640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.103:6814 osd.1 since back 2026-03-06T12:38:47.769545+0000 front 2026-03-06T12:38:32.010959+0000 (oldest deadline 2026-03-06T12:39:02.369705+0000)
2026-03-06T13:39:15.917 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:39:15 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0[60965]: 2026-03-06T12:39:11.905+0000 7fbe910a4640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.103:6822 osd.2 since back 2026-03-06T12:38:42.372766+0000 front 2026-03-06T12:38:47.669300+0000 (oldest deadline 2026-03-06T12:39:02.369705+0000)
2026-03-06T13:39:22.427 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:39:21 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2[73177]: 2026-03-06T12:39:21.560+0000 7f174ffd1640 -1 osd.2 35 heartbeat_check: no reply from 192.168.123.103:6806 osd.0 since back 2026-03-06T12:38:47.669324+0000 front 2026-03-06T12:38:47.761651+0000 (oldest deadline 2026-03-06T12:39:16.195283+0000)
2026-03-06T13:39:22.427 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:39:21 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2[73177]: 2026-03-06T12:39:21.560+0000 7f174ffd1640 -1 osd.2 35 heartbeat_check: no reply from 192.168.123.103:6814 osd.1 since back 2026-03-06T12:38:28.687810+0000 front 2026-03-06T12:38:47.676399+0000 (oldest deadline 2026-03-06T12:38:52.211779+0000)
2026-03-06T13:39:34.555 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:39:34 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1[67040]: 2026-03-06T12:39:34.230+0000 7efe53f8b640 -1 osd.1 36 heartbeat_check: no reply from 192.168.123.103:6822 osd.2 since back 2026-03-06T12:38:59.859869+0000 front 2026-03-06T12:38:47.769556+0000 (oldest deadline 2026-03-06T12:39:16.212609+0000)
2026-03-06T13:39:34.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:39:34 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[50387]: 2026-03-06T12:39:34.291+0000 7f10b3552640 -1 mon.a@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is log(1 entries from seq 385 at 2026-03-06T12:39:02.057482+0000)
2026-03-06T13:39:34.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:39:34 vm03 ceph-mon[50411]: Health check failed: no active mgr (MGR_DOWN)
2026-03-06T13:39:34.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:39:34 vm03 ceph-mon[50411]: osdmap e36: 3 total, 3 up, 3 in
2026-03-06T13:39:34.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:39:34 vm03 ceph-mon[50411]: mgrmap e16: no daemons active (since 10s)
2026-03-06T13:39:34.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:39:34 vm03 ceph-mon[50411]: osd.1 reported failed by osd.0
2026-03-06T13:39:34.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:39:34 vm03 ceph-mon[50411]: osd.2 reported failed by osd.1
2026-03-06T13:39:34.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:39:34 vm03 ceph-mon[50411]: osd.2 reported failed by osd.0
2026-03-06T13:39:34.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:39:34 vm03 ceph-mon[50411]: osd.2 failed (root=default,host=vm03) (2 reporters from different osd after 46.050847 >= grace 20.000000)
2026-03-06T13:39:34.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:39:34 vm03 ceph-mon[50411]: osd.1 failure report canceled by osd.0
2026-03-06T13:39:34.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:39:34 vm03 ceph-mon[50411]: osd.1 reported failed by osd.2
2026-03-06T13:39:34.859 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:39:34 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.2.service: A process of this unit has been killed by the OOM killer.
2026-03-06T13:39:42.506 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:39:42 vm03 ceph-mon[50411]: Health check failed: 1 osds down (OSD_DOWN)
2026-03-06T13:39:42.885 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:39:42 vm03 ceph-mon[50411]: osdmap e37: 3 total, 2 up, 3 in
2026-03-06T13:40:00.375 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:39:57 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.0.service: A process of this unit has been killed by the OOM killer.
2026-03-06T13:40:01.625 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:01 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[50387]: 2026-03-06T12:40:01.252+0000 7f10b3552640 -1 mon.a@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is osd_failure(failed timeout osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652] for 41sec e35 v35)
2026-03-06T13:40:03.365 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:02 vm03 ceph-mon[50411]: osdmap e38: 3 total, 2 up, 3 in
2026-03-06T13:40:04.359 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:40:04 vm03 podman[106040]: 2026-03-06 13:40:04.020080387 +0100 CET m=+27.077935514 container died 53b9e18f399cb63ac3bf466dea26573261e0caa954c843ad7391ecd9b4ae242f (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:40:10.733 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:10 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[50387]: 2026-03-06T12:40:10.649+0000 7f10b3552640 -1 mon.a@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is osd_failure(failed timeout osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652] for 41sec e35 v35)
2026-03-06T13:40:10.734 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:10 vm03 ceph-mon[50411]: osd.0 reported immediately failed by osd.1
2026-03-06T13:40:10.734 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:10 vm03 ceph-mon[50411]: osd.0 failed (root=default,host=vm03) (connection refused reported by osd.1)
2026-03-06T13:40:10.734 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:10 vm03 ceph-mon[50411]: osd.0 reported immediately failed by osd.1
2026-03-06T13:40:10.734 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:10 vm03 ceph-mon[50411]: osd.0 reported immediately failed by osd.1
2026-03-06T13:40:11.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:10 vm03 ceph-mon[50411]: osd.0 reported immediately failed by osd.1
2026-03-06T13:40:11.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:10 vm03 ceph-mon[50411]: osd.0 reported immediately failed by osd.1
2026-03-06T13:40:11.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:10 vm03 ceph-mon[50411]: osd.0 reported immediately failed by osd.1
2026-03-06T13:40:11.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:10 vm03 ceph-mon[50411]: osd.0 reported immediately failed by osd.1
2026-03-06T13:40:11.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:10 vm03 ceph-mon[50411]: osd.0 reported immediately failed by osd.1
2026-03-06T13:40:11.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:10 vm03 ceph-mon[50411]: osd.0 reported immediately failed by osd.1
2026-03-06T13:40:11.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:10 vm03 ceph-mon[50411]: osd.0 reported immediately failed by osd.1
2026-03-06T13:40:14.121 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:13 vm03 ceph-mon[50411]: Health check update: 2 osds down (OSD_DOWN)
2026-03-06T13:40:16.519 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:15 vm03 ceph-mon[50411]: osdmap e39: 3 total, 1 up, 3 in
2026-03-06T13:40:16.519 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:16 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[50387]: 2026-03-06T12:40:16.350+0000 7f10b3552640 -1 mon.a@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is osd_failure(failed timeout osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652] for 41sec e35 v35)
2026-03-06T13:40:16.519 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:40:16 vm03 podman[106092]: 2026-03-06 13:40:16.508014429 +0100 CET m=+13.080066839 container died 0f944efab3bba92d6924171934dca2b9b075a8c7149b8be67d75fb8833411ab9 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8)
2026-03-06T13:40:17.116 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:40:16 vm03 podman[106092]: 2026-03-06 13:40:16.909485837 +0100 CET m=+13.481538257 container remove 0f944efab3bba92d6924171934dca2b9b075a8c7149b8be67d75fb8833411ab9 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2)
2026-03-06T13:40:17.116 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:40:16 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.0.service: Main process exited, code=exited, status=137/n/a
2026-03-06T13:40:17.116 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:40:16 vm03 podman[106040]: 2026-03-06 13:40:16.834302942 +0100 CET m=+39.892158069 container remove 53b9e18f399cb63ac3bf466dea26573261e0caa954c843ad7391ecd9b4ae242f (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8)
2026-03-06T13:40:17.116 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:40:16 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.2.service: Main process exited, code=exited, status=137/n/a
2026-03-06T13:40:19.376 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:40:19 vm03 podman[106386]: 2026-03-06 13:40:18.822543264 +0100 CET m=+0.953247864 image pull 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3
2026-03-06T13:40:20.160 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:19 vm03 ceph-mon[50411]: osdmap e40: 3 total, 1 up, 3 in
2026-03-06T13:40:32.108 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:30 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[50387]: 2026-03-06T12:40:30.938+0000 7f10b3552640 -1 mon.a@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is osd_failure(failed timeout osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652] for 41sec e35 v35)
2026-03-06T13:40:34.932 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:40:33 vm03 podman[106386]: 2026-03-06 13:40:33.130343305 +0100 CET m=+15.261047915 container create ccb9fef23315587eb07d5c79e18dd5ab32be0f3a266c43cdd5f0a8833f8e8dfe (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8)
2026-03-06T13:40:45.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:43 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[50387]: 2026-03-06T12:40:42.636+0000 7f10b3552640 -1 mon.a@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is osd_failure(failed timeout osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652] for 41sec e35 v35)
2026-03-06T13:40:45.231 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:40:40 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.1.service: A process of this unit has been killed by the OOM killer.
2026-03-06T13:40:55.583 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:40:54 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[50387]: 2026-03-06T12:40:54.860+0000 7f10b3552640 -1 mon.a@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is osd_failure(failed timeout osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652] for 41sec e35 v35)
2026-03-06T13:41:02.172 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:01 vm03 podman[106386]: 2026-03-06 13:41:01.463167919 +0100 CET m=+43.593872530 container init ccb9fef23315587eb07d5c79e18dd5ab32be0f3a266c43cdd5f0a8833f8e8dfe (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2)
2026-03-06T13:41:05.600 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:05 vm03 podman[106386]: 2026-03-06 13:41:05.313264665 +0100 CET m=+47.443969275 container start ccb9fef23315587eb07d5c79e18dd5ab32be0f3a266c43cdd5f0a8833f8e8dfe (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8)
2026-03-06T13:41:05.600 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:05 vm03 bash[106386]: ccb9fef23315587eb07d5c79e18dd5ab32be0f3a266c43cdd5f0a8833f8e8dfe
2026-03-06T13:41:05.600 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:05 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[50387]: 2026-03-06T12:41:05.432+0000 7f10b3552640 -1 mon.a@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is osd_failure(failed timeout osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652] for 41sec e35 v35)
2026-03-06T13:41:05.899 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:05 vm03 systemd[1]: Started Ceph mgr.a for b4d7b36a-1958-11f1-a2a1-8fd8798eb057.
2026-03-06T13:41:07.376 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:07 vm03 ceph-mgr[106782]: -- 192.168.123.103:0/1059541146 <== mon.0 v2:192.168.123.103:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x56104778f380 con 0x56104774c800
2026-03-06T13:41:07.376 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:07 vm03 ceph-mgr[106782]: -- 192.168.123.103:0/1059541146 <== mon.0 v2:192.168.123.103:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x5610477772c0 con 0x56104774c800
2026-03-06T13:41:07.815 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:07 vm03 podman[106683]: 2026-03-06 13:41:07.382884586 +0100 CET m=+14.606175000 container died 47ea6d59d261434362a5095f8b8d912847e93835360afd6ea6e08b5b55d7436f (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
2026-03-06T13:41:08.109 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:08 vm03 podman[107277]: 2026-03-06 13:41:07.931503181 +0100 CET m=+0.365442610 image pull 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b
2026-03-06T13:41:08.109 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:08 vm03 podman[107277]: 2026-03-06 13:41:08.050054435 +0100 CET m=+0.483993864 container create d7407b368eedf24cb5b99838c85400a5a4f2c92cfbfd6c66bb6b7dbe227e2e28 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-deactivate, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:41:08.109 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:07 vm03 podman[106683]: 2026-03-06 13:41:07.937822218 +0100 CET m=+15.161112622 container remove 47ea6d59d261434362a5095f8b8d912847e93835360afd6ea6e08b5b55d7436f (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git)
2026-03-06T13:41:09.334 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:09 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.1.service: Main process exited, code=exited, status=137/n/a
2026-03-06T13:41:09.335 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:09 vm03 podman[107277]: 2026-03-06 13:41:09.144809109 +0100 CET m=+1.578748538 container init d7407b368eedf24cb5b99838c85400a5a4f2c92cfbfd6c66bb6b7dbe227e2e28 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-deactivate, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git)
2026-03-06T13:41:09.618 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:09 vm03 podman[107314]: 2026-03-06 13:41:09.479607925 +0100 CET m=+0.362742558 image pull 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b
2026-03-06T13:41:09.618 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:09 vm03 podman[107314]: 2026-03-06 13:41:09.56982126 +0100 CET m=+0.452955893 container create 0e93363ace912528787d516944ee5e9a84a083ec8da4cb0e5e80c4416d5f42fa (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-deactivate, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:41:09.618 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:09 vm03 podman[107277]: 2026-03-06 13:41:09.302700111 +0100 CET m=+1.736639540 container start d7407b368eedf24cb5b99838c85400a5a4f2c92cfbfd6c66bb6b7dbe227e2e28 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-deactivate, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
2026-03-06T13:41:09.618 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:09 vm03 podman[107277]: 2026-03-06 13:41:09.350475017 +0100 CET m=+1.784414446 container attach d7407b368eedf24cb5b99838c85400a5a4f2c92cfbfd6c66bb6b7dbe227e2e28 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-deactivate, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:41:10.132 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:09 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:09.826+0000 7f5b197a4100 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-06T13:41:10.132 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:09 vm03 podman[107314]: 2026-03-06 13:41:09.957615243 +0100 CET m=+0.840749886 container init 0e93363ace912528787d516944ee5e9a84a083ec8da4cb0e5e80c4416d5f42fa (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-deactivate, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8)
2026-03-06T13:41:10.632 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:10 vm03 podman[107314]: 2026-03-06 13:41:10.131220156 +0100 CET m=+1.014354789 container start 0e93363ace912528787d516944ee5e9a84a083ec8da4cb0e5e80c4416d5f42fa (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-deactivate, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:41:10.632 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:10 vm03 podman[107314]: 2026-03-06 13:41:10.18281427 +0100 CET m=+1.065948903 container attach 0e93363ace912528787d516944ee5e9a84a083ec8da4cb0e5e80c4416d5f42fa (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-deactivate, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
2026-03-06T13:41:11.149 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:10 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[50387]: 2026-03-06T12:41:10.611+0000 7f10b3552640 -1 mon.a@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is osd_failure(failed timeout osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652] for 41sec e35 v35)
2026-03-06T13:41:12.362 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:12 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:12.043+0000 7f5b197a4100 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-06T13:41:12.362 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:12 vm03 podman[107574]: 2026-03-06 13:41:12.221385109 +0100 CET m=+0.151155707 container create 8c6c736e427cdb7998dcace4e002dd598979942423049b36a279bc9b542a3ab7 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-deactivate, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git)
2026-03-06T13:41:12.362 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:12 vm03 podman[107574]: 2026-03-06 13:41:12.162720632 +0100 CET m=+0.092491240 image pull 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b
2026-03-06T13:41:12.611 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:12 vm03 podman[107574]: 2026-03-06 13:41:12.45938809 +0100 CET m=+0.389158688 container init 8c6c736e427cdb7998dcace4e002dd598979942423049b36a279bc9b542a3ab7 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-deactivate, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2)
2026-03-06T13:41:12.611 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:12 vm03 podman[107574]: 2026-03-06 13:41:12.583036302 +0100 CET m=+0.512806900 container start 8c6c736e427cdb7998dcace4e002dd598979942423049b36a279bc9b542a3ab7 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-deactivate, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:41:12.611 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:12 vm03 podman[107574]: 2026-03-06 13:41:12.599803735 +0100 CET m=+0.529574333 container attach 8c6c736e427cdb7998dcace4e002dd598979942423049b36a279bc9b542a3ab7 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-deactivate, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:41:13.727 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:13 vm03 conmon[107664]: conmon 8c6c736e427cdb7998dc : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c6c736e427cdb7998dcace4e002dd598979942423049b36a279bc9b542a3ab7.scope/memory.events
2026-03-06T13:41:13.727 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:13 vm03 podman[107574]: 2026-03-06 13:41:13.445078869 +0100 CET m=+1.374849467 container died 8c6c736e427cdb7998dcace4e002dd598979942423049b36a279bc9b542a3ab7 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-deactivate, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
2026-03-06T13:41:14.109 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:13 vm03 podman[107574]: 2026-03-06 13:41:13.779729256 +0100 CET m=+1.709499845 container remove 8c6c736e427cdb7998dcace4e002dd598979942423049b36a279bc9b542a3ab7 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-deactivate, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552)
2026-03-06T13:41:14.110 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:13 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.1.service: Failed with result 'exit-code'.
2026-03-06T13:41:14.110 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:13 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.1.service: Consumed 23.282s CPU time.
2026-03-06T13:41:16.110 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:15 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[50387]: 2026-03-06T12:41:15.621+0000 7f10b3552640 -1 mon.a@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is osd_failure(failed timeout osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652] for 41sec e35 v35)
2026-03-06T13:41:18.347 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:18 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:18.086+0000 7f5b197a4100 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-06T13:41:18.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:18 vm03 conmon[107307]: conmon d7407b368eedf24cb5b9 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7407b368eedf24cb5b99838c85400a5a4f2c92cfbfd6c66bb6b7dbe227e2e28.scope/memory.events
2026-03-06T13:41:18.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:18 vm03 podman[107277]: 2026-03-06 13:41:18.348706655 +0100 CET m=+10.782646084 container died d7407b368eedf24cb5b99838c85400a5a4f2c92cfbfd6c66bb6b7dbe227e2e28 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-deactivate, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:41:18.930 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:18 vm03 podman[107277]: 2026-03-06 13:41:18.614859211 +0100 CET m=+11.048798640 container remove d7407b368eedf24cb5b99838c85400a5a4f2c92cfbfd6c66bb6b7dbe227e2e28 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-deactivate, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
2026-03-06T13:41:18.930 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:18 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.2.service: Failed with result 'exit-code'.
2026-03-06T13:41:18.930 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:18 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.2.service: Consumed 23.239s CPU time.
2026-03-06T13:41:19.210 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:18 vm03 conmon[107355]: conmon 0e93363ace912528787d : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0e93363ace912528787d516944ee5e9a84a083ec8da4cb0e5e80c4416d5f42fa.scope/memory.events
2026-03-06T13:41:19.210 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:18 vm03 podman[107314]: 2026-03-06 13:41:18.934910356 +0100 CET m=+9.818044989 container died 0e93363ace912528787d516944ee5e9a84a083ec8da4cb0e5e80c4416d5f42fa (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-deactivate, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
2026-03-06T13:41:19.211 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:19 vm03 podman[107314]: 2026-03-06 13:41:19.146145397 +0100 CET m=+10.029280030 container remove 0e93363ace912528787d516944ee5e9a84a083ec8da4cb0e5e80c4416d5f42fa (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-deactivate, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
2026-03-06T13:41:19.566 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:19 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.0.service: Failed with result 'exit-code'.
2026-03-06T13:41:19.566 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:19 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.0.service: Consumed 21.008s CPU time.
2026-03-06T13:41:20.622 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:20 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:20.304+0000 7f5b197a4100 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-06T13:41:20.622 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:20 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:20.540+0000 7f5b197a4100 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-06T13:41:21.048 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:20 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[50387]: 2026-03-06T12:41:20.622+0000 7f10b3552640 -1 mon.a@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is osd_failure(failed timeout osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652] for 41sec e35 v35)
2026-03-06T13:41:21.361 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:21 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:21.045+0000 7f5b197a4100 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-06T13:41:24.609 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:24 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.1.service: Scheduled restart job, restart counter is at 1.
2026-03-06T13:41:24.610 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:24 vm03 systemd[1]: Stopped Ceph osd.1 for b4d7b36a-1958-11f1-a2a1-8fd8798eb057.
2026-03-06T13:41:24.610 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:24 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.1.service: Consumed 23.282s CPU time.
2026-03-06T13:41:24.610 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:24 vm03 systemd[1]: Starting Ceph osd.1 for b4d7b36a-1958-11f1-a2a1-8fd8798eb057...
2026-03-06T13:41:24.936 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:24 vm03 podman[113990]: 2026-03-06 13:41:24.644354854 +0100 CET m=+0.063405502 container create 595c6dc4efebc784b89f612a5968f1e460f43e3ad8d4987f1144cb0525f4d3ed (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git)
2026-03-06T13:41:24.936 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:24 vm03 podman[113990]: 2026-03-06 13:41:24.618579312 +0100 CET m=+0.037629970 image pull 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b
2026-03-06T13:41:24.936 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:24 vm03 podman[113990]: 2026-03-06 13:41:24.762860762 +0100 CET m=+0.181911410 container init 595c6dc4efebc784b89f612a5968f1e460f43e3ad8d4987f1144cb0525f4d3ed (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git)
2026-03-06T13:41:24.936 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:24 vm03 podman[113990]: 2026-03-06 13:41:24.774201318 +0100 CET m=+0.193251966 container start 595c6dc4efebc784b89f612a5968f1e460f43e3ad8d4987f1144cb0525f4d3ed (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9)
2026-03-06T13:41:24.936 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:24 vm03 podman[113990]: 2026-03-06 13:41:24.780627174 +0100 CET m=+0.199677822 container attach 595c6dc4efebc784b89f612a5968f1e460f43e3ad8d4987f1144cb0525f4d3ed (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
2026-03-06T13:41:25.364 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:24 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:24.931+0000 7f5b197a4100 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-06T13:41:25.859 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:25 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[50387]: 2026-03-06T12:41:25.623+0000 7f10b3552640 -1 mon.a@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is osd_failure(failed timeout osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652] for 41sec e35 v35)
2026-03-06T13:41:25.860 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:25 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate[114071]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:25.860 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:25 vm03 bash[113990]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:25.860 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:25 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate[114071]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:25.860 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:25 vm03 bash[113990]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:26.195 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:25 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:25.927+0000 7f5b197a4100 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-06T13:41:26.605 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:26 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:26.190+0000 7f5b197a4100 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-06T13:41:26.859 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:26 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:26.597+0000 7f5b197a4100 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-06T13:41:27.229 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:26 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:26.935+0000 7f5b197a4100 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-06T13:41:27.609 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:27 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:27.225+0000 7f5b197a4100 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-06T13:41:28.360 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:27 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate[114071]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-06T13:41:28.360 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:27 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate[114071]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:28.360 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:27 vm03 bash[113990]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-06T13:41:28.360 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:27 vm03 bash[113990]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:28.360 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:27 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate[114071]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:28.360 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:27 vm03 bash[113990]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:28.360 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:27 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate[114071]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
2026-03-06T13:41:28.360 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:27 vm03 bash[113990]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
2026-03-06T13:41:28.360 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:28 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate[114071]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-0a12af76-a838-4813-b866-658ad8d97b62/osd-block-dcfd0b5e-f0e8-4d27-9ba3-77494068f199 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
2026-03-06T13:41:28.360 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:28 vm03 bash[113990]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-0a12af76-a838-4813-b866-658ad8d97b62/osd-block-dcfd0b5e-f0e8-4d27-9ba3-77494068f199 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
2026-03-06T13:41:28.652 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:28 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate[114071]: Running command: /usr/bin/ln -snf /dev/ceph-0a12af76-a838-4813-b866-658ad8d97b62/osd-block-dcfd0b5e-f0e8-4d27-9ba3-77494068f199 /var/lib/ceph/osd/ceph-1/block
2026-03-06T13:41:28.652 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:28 vm03 bash[113990]: Running command: /usr/bin/ln -snf /dev/ceph-0a12af76-a838-4813-b866-658ad8d97b62/osd-block-dcfd0b5e-f0e8-4d27-9ba3-77494068f199 /var/lib/ceph/osd/ceph-1/block
2026-03-06T13:41:28.652 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:28 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate[114071]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
2026-03-06T13:41:28.652 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:28 vm03 bash[113990]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
2026-03-06T13:41:28.652 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:28 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate[114071]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
2026-03-06T13:41:28.652 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:28 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate[114071]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
2026-03-06T13:41:28.652 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:28 vm03 bash[113990]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
2026-03-06T13:41:28.652 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:28 vm03 bash[113990]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
2026-03-06T13:41:28.652 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:28 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate[114071]: --> ceph-volume lvm activate successful for osd ID: 1
2026-03-06T13:41:28.652 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:28 vm03 bash[113990]: --> ceph-volume lvm activate successful for osd ID: 1
2026-03-06T13:41:29.063 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:28 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:28.811+0000 7f5b197a4100 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-06T13:41:29.063 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:28 vm03 podman[113990]: 2026-03-06 13:41:28.650471141 +0100 CET m=+4.069521789 container died 595c6dc4efebc784b89f612a5968f1e460f43e3ad8d4987f1144cb0525f4d3ed (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
2026-03-06T13:41:29.063 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:28 vm03 podman[113990]: 2026-03-06 13:41:28.896696682 +0100 CET m=+4.315747319 container remove 595c6dc4efebc784b89f612a5968f1e460f43e3ad8d4987f1144cb0525f4d3ed (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-activate, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:41:29.063 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:28 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.2.service: Scheduled restart job, restart counter is at 1.
2026-03-06T13:41:29.063 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:28 vm03 systemd[1]: Stopped Ceph osd.2 for b4d7b36a-1958-11f1-a2a1-8fd8798eb057.
2026-03-06T13:41:29.063 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:28 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.2.service: Consumed 23.239s CPU time.
2026-03-06T13:41:29.063 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:28 vm03 systemd[1]: Starting Ceph osd.2 for b4d7b36a-1958-11f1-a2a1-8fd8798eb057...
2026-03-06T13:41:29.317 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:29 vm03 podman[116194]: 2026-03-06 13:41:29.061621417 +0100 CET m=+0.068247945 container create b34a6d3582dd1acce3ea1edfe0749015cf6bc5713aabb92c03776bed2b8d23ae (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552)
2026-03-06T13:41:29.317 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:29 vm03 podman[116194]: 2026-03-06 13:41:29.037868571 +0100 CET m=+0.044495099 image pull 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b
2026-03-06T13:41:29.317 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:29 vm03 podman[116194]: 2026-03-06 13:41:29.151093174 +0100 CET m=+0.157719702 container init b34a6d3582dd1acce3ea1edfe0749015cf6bc5713aabb92c03776bed2b8d23ae (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git)
2026-03-06T13:41:29.317 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:29 vm03 podman[116194]: 2026-03-06 13:41:29.193713304 +0100 CET m=+0.200339832 container start b34a6d3582dd1acce3ea1edfe0749015cf6bc5713aabb92c03776bed2b8d23ae (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:41:29.317 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:29 vm03 podman[116194]: 2026-03-06 13:41:29.196905427 +0100 CET m=+0.203531955 container attach b34a6d3582dd1acce3ea1edfe0749015cf6bc5713aabb92c03776bed2b8d23ae (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:41:29.317 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:29 vm03 podman[116408]: 2026-03-06 13:41:29.305926304 +0100 CET m=+0.075949830 container create a79630f11d1582b6ba8424b3539a194afb1355ff8a9aad3ffcdf2d37e0af504a (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git)
2026-03-06T13:41:29.605 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:29 vm03 podman[116408]: 2026-03-06 13:41:29.265687241 +0100 CET m=+0.035710767 image pull 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b
2026-03-06T13:41:29.605 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:29 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.0.service: Scheduled restart job, restart counter is at 1.
2026-03-06T13:41:29.605 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:29 vm03 systemd[1]: Stopped Ceph osd.0 for b4d7b36a-1958-11f1-a2a1-8fd8798eb057.
2026-03-06T13:41:29.605 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:29 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.0.service: Consumed 21.008s CPU time.
2026-03-06T13:41:29.605 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:29 vm03 systemd[1]: Starting Ceph osd.0 for b4d7b36a-1958-11f1-a2a1-8fd8798eb057...
2026-03-06T13:41:29.860 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:29 vm03 podman[116408]: 2026-03-06 13:41:29.629910328 +0100 CET m=+0.399933854 container init a79630f11d1582b6ba8424b3539a194afb1355ff8a9aad3ffcdf2d37e0af504a (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2)
2026-03-06T13:41:29.860 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:29 vm03 podman[116408]: 2026-03-06 13:41:29.6987715 +0100 CET m=+0.468795016 container start a79630f11d1582b6ba8424b3539a194afb1355ff8a9aad3ffcdf2d37e0af504a (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2)
2026-03-06T13:41:29.860 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:29 vm03 bash[116408]: a79630f11d1582b6ba8424b3539a194afb1355ff8a9aad3ffcdf2d37e0af504a
2026-03-06T13:41:29.860 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:29 vm03 systemd[1]: Started Ceph osd.1 for b4d7b36a-1958-11f1-a2a1-8fd8798eb057.
2026-03-06T13:41:29.861 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:29 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate[116304]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:29.861 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:29 vm03 bash[116194]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:29.861 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:29 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate[116304]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:29.861 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:29 vm03 bash[116194]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:29.861 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:29 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:29.592+0000 7f5b197a4100 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-06T13:41:30.421 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:30 vm03 podman[116844]: 2026-03-06 13:41:30.148311578 +0100 CET m=+0.187755698 container create 8130caae61ef7b372ffd3997d8177863b7f6a45c4fd7d5aa23e98d85ea1853fa (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
2026-03-06T13:41:30.422 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:30 vm03 podman[116844]: 2026-03-06 13:41:30.054054945 +0100 CET m=+0.093499076 image pull 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b
2026-03-06T13:41:30.422 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:30 vm03 podman[116844]: 2026-03-06 13:41:30.32105607 +0100 CET m=+0.360500191 container init 8130caae61ef7b372ffd3997d8177863b7f6a45c4fd7d5aa23e98d85ea1853fa (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2)
2026-03-06T13:41:30.698 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:30 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[50387]: 2026-03-06T12:41:30.626+0000 7f10b3552640 -1 mon.a@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is osd_failure(failed timeout osd.1 [v2:192.168.123.103:6810/3227865652,v1:192.168.123.103:6811/3227865652] for 41sec e35 v35)
2026-03-06T13:41:30.699 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:30 vm03 podman[116844]: 2026-03-06 13:41:30.427790407 +0100 CET m=+0.467234527 container start 8130caae61ef7b372ffd3997d8177863b7f6a45c4fd7d5aa23e98d85ea1853fa (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
2026-03-06T13:41:30.699 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:30 vm03 podman[116844]: 2026-03-06 13:41:30.434937523 +0100 CET m=+0.474381643 container attach 8130caae61ef7b372ffd3997d8177863b7f6a45c4fd7d5aa23e98d85ea1853fa (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9)
2026-03-06T13:41:31.109 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:30 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1[116512]: 2026-03-06T12:41:30.855+0000 7f4a03ebb740 -1 Falling back to public interface
2026-03-06T13:41:31.610 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:31 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate[117038]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:31.610 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:31 vm03 bash[116844]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:31.610 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:31 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate[117038]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:31.610 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:31 vm03 bash[116844]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:32.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:31 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate[116304]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-06T13:41:32.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:31 vm03 bash[116194]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-06T13:41:32.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:31 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate[116304]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:32.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:31 vm03 bash[116194]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:32.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:31 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate[116304]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:32.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:31 vm03 bash[116194]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:32.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:31 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate[116304]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-06T13:41:32.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:31 vm03 bash[116194]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-06T13:41:32.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:31 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate[116304]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-cb081433-f87b-4948-aa7e-72f7b0f4875a/osd-block-236345e0-86d2-4671-a2c3-ba26e1d204fd --path /var/lib/ceph/osd/ceph-2 --no-mon-config
2026-03-06T13:41:32.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:31 vm03 bash[116194]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-cb081433-f87b-4948-aa7e-72f7b0f4875a/osd-block-236345e0-86d2-4671-a2c3-ba26e1d204fd --path /var/lib/ceph/osd/ceph-2 --no-mon-config
2026-03-06T13:41:32.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:32 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate[116304]: Running command: /usr/bin/ln -snf /dev/ceph-cb081433-f87b-4948-aa7e-72f7b0f4875a/osd-block-236345e0-86d2-4671-a2c3-ba26e1d204fd /var/lib/ceph/osd/ceph-2/block
2026-03-06T13:41:32.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:32 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate[116304]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
2026-03-06T13:41:32.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:32 vm03 bash[116194]: Running command: /usr/bin/ln -snf /dev/ceph-cb081433-f87b-4948-aa7e-72f7b0f4875a/osd-block-236345e0-86d2-4671-a2c3-ba26e1d204fd /var/lib/ceph/osd/ceph-2/block
2026-03-06T13:41:32.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:32 vm03 bash[116194]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
2026-03-06T13:41:32.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:32 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate[116304]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
2026-03-06T13:41:32.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:32 vm03 bash[116194]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
2026-03-06T13:41:32.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:32 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate[116304]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-06T13:41:32.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:32 vm03 bash[116194]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-06T13:41:32.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:32 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate[116304]: --> ceph-volume lvm activate successful for osd ID: 2
2026-03-06T13:41:32.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:32 vm03 bash[116194]: --> ceph-volume lvm activate successful for osd ID: 2
2026-03-06T13:41:32.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:32 vm03 conmon[116304]: conmon b34a6d3582dd1acce3ea : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b34a6d3582dd1acce3ea1edfe0749015cf6bc5713aabb92c03776bed2b8d23ae.scope/memory.events
2026-03-06T13:41:32.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:32 vm03 podman[116194]: 2026-03-06 13:41:32.450351922 +0100 CET m=+3.456978440 container died b34a6d3582dd1acce3ea1edfe0749015cf6bc5713aabb92c03776bed2b8d23ae (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9)
2026-03-06T13:41:32.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:32 vm03 ceph-mon[50411]: from='osd.1 [v2:192.168.123.103:6800/1783295990,v1:192.168.123.103:6801/1783295990]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-06T13:41:32.611 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:32 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1[116512]: 2026-03-06T12:41:32.343+0000 7f4a03ebb740 -1 osd.1 40 log_to_monitors true
2026-03-06T13:41:32.611 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:32 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1[116512]: 2026-03-06T12:41:32.501+0000 7f49fb465640 -1 osd.1 40 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-06T13:41:33.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:32 vm03 podman[116194]: 2026-03-06 13:41:32.817658781 +0100 CET m=+3.824285309 container remove b34a6d3582dd1acce3ea1edfe0749015cf6bc5713aabb92c03776bed2b8d23ae (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-activate, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2)
2026-03-06T13:41:33.422 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate[117038]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-06T13:41:33.422 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate[117038]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:33.422 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 bash[116844]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-06T13:41:33.422 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 bash[116844]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:33.422 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate[117038]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:33.422 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 bash[116844]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-06T13:41:33.422 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate[117038]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-03-06T13:41:33.422 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate[117038]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-bc8309e2-6876-472f-989d-2cbcb6e84256/osd-block-314c4c77-2809-4001-a1fe-5031b74f6cd2 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
2026-03-06T13:41:33.422 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 bash[116844]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-03-06T13:41:33.422 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 bash[116844]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-bc8309e2-6876-472f-989d-2cbcb6e84256/osd-block-314c4c77-2809-4001-a1fe-5031b74f6cd2 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
2026-03-06T13:41:33.698 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:33 vm03 ceph-mon[50411]: from='osd.1 [v2:192.168.123.103:6800/1783295990,v1:192.168.123.103:6801/1783295990]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-06T13:41:33.698 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:33 vm03 ceph-mon[50411]: osdmap e41: 3 total, 1 up, 3 in
2026-03-06T13:41:33.698 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:33 vm03 ceph-mon[50411]: from='osd.1 [v2:192.168.123.103:6800/1783295990,v1:192.168.123.103:6801/1783295990]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate[117038]: Running command: /usr/bin/ln -snf /dev/ceph-bc8309e2-6876-472f-989d-2cbcb6e84256/osd-block-314c4c77-2809-4001-a1fe-5031b74f6cd2 /var/lib/ceph/osd/ceph-0/block
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 bash[116844]: Running command: /usr/bin/ln -snf /dev/ceph-bc8309e2-6876-472f-989d-2cbcb6e84256/osd-block-314c4c77-2809-4001-a1fe-5031b74f6cd2 /var/lib/ceph/osd/ceph-0/block
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate[117038]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 bash[116844]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate[117038]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 bash[116844]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate[117038]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 bash[116844]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate[117038]: --> ceph-volume lvm activate successful for osd ID: 0
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 bash[116844]: --> ceph-volume lvm activate successful for osd ID: 0
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 conmon[117038]: conmon 8130caae61ef7b372ffd : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8130caae61ef7b372ffd3997d8177863b7f6a45c4fd7d5aa23e98d85ea1853fa.scope/memory.events
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 podman[116844]: 2026-03-06 13:41:33.593898757 +0100 CET m=+3.633342867 container died 8130caae61ef7b372ffd3997d8177863b7f6a45c4fd7d5aa23e98d85ea1853fa (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:33 vm03 podman[119130]: 2026-03-06 13:41:33.420881384 +0100 CET m=+0.094153861 container create 58316ab89fcac138d40e58c11ca22d78645ef8364df7214d24330d59109bcbb0 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:33 vm03 podman[119130]: 2026-03-06 13:41:33.384290379 +0100 CET m=+0.057562867 image pull 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:33 vm03 podman[119130]: 2026-03-06 13:41:33.559399878 +0100 CET m=+0.232672355 container init 58316ab89fcac138d40e58c11ca22d78645ef8364df7214d24330d59109bcbb0 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8)
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:33 vm03 podman[119130]: 2026-03-06 13:41:33.649658578 +0100 CET m=+0.322931056 container start 58316ab89fcac138d40e58c11ca22d78645ef8364df7214d24330d59109bcbb0 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
2026-03-06T13:41:33.699 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:33 vm03 bash[119130]: 58316ab89fcac138d40e58c11ca22d78645ef8364df7214d24330d59109bcbb0
2026-03-06T13:41:34.109 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:33 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:33.709+0000 7f5b197a4100 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-06T13:41:34.110 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:33 vm03 podman[116844]: 2026-03-06 13:41:33.871678846 +0100 CET m=+3.911122966 container remove 8130caae61ef7b372ffd3997d8177863b7f6a45c4fd7d5aa23e98d85ea1853fa (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-activate, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2)
2026-03-06T13:41:34.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:33 vm03 systemd[1]: Started Ceph osd.2 for b4d7b36a-1958-11f1-a2a1-8fd8798eb057.
2026-03-06T13:41:34.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:34 vm03 ceph-mon[50411]: Health check update: 3 osds down (OSD_DOWN)
2026-03-06T13:41:34.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:34 vm03 ceph-mon[50411]: Health check failed: 1 host (3 osds) down (OSD_HOST_DOWN)
2026-03-06T13:41:34.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:34 vm03 ceph-mon[50411]: Health check failed: 1 root (3 osds) down (OSD_ROOT_DOWN)
2026-03-06T13:41:34.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:34 vm03 ceph-mon[50411]: osdmap e42: 3 total, 0 up, 3 in
2026-03-06T13:41:34.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:34 vm03 ceph-mon[50411]: Health check cleared: OSD_HOST_DOWN (was: 1 host (3 osds) down)
2026-03-06T13:41:34.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:34 vm03 ceph-mon[50411]: Health check cleared: OSD_ROOT_DOWN (was: 1 root (3 osds) down)
2026-03-06T13:41:34.553 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:34 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2[119223]: 2026-03-06T12:41:34.272+0000 7f355e6f5740 -1 Falling back to public interface
2026-03-06T13:41:34.553 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:34 vm03 podman[119805]: 2026-03-06 13:41:34.325773358 +0100 CET m=+0.076917731 container create e64314a32f8a2862cc5326ad994debb85b77594e4b213a717f384ada41483cef (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git)
2026-03-06T13:41:34.553 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:34 vm03 podman[119805]: 2026-03-06 13:41:34.297668286 +0100 CET m=+0.048812669 image pull 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b
2026-03-06T13:41:34.553 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:34 vm03 podman[119805]: 2026-03-06 13:41:34.470089333 +0100 CET m=+0.221233695 container init e64314a32f8a2862cc5326ad994debb85b77594e4b213a717f384ada41483cef (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8)
2026-03-06T13:41:34.859 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:34 vm03 podman[119805]: 2026-03-06 13:41:34.551626139 +0100 CET m=+0.302770512 container start e64314a32f8a2862cc5326ad994debb85b77594e4b213a717f384ada41483cef (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:41:34.860 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:34 vm03 bash[119805]: e64314a32f8a2862cc5326ad994debb85b77594e4b213a717f384ada41483cef
2026-03-06T13:41:34.860 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:34 vm03 systemd[1]: Started Ceph osd.0 for b4d7b36a-1958-11f1-a2a1-8fd8798eb057.
2026-03-06T13:41:35.609 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:35 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2[119223]: 2026-03-06T12:41:35.341+0000 7f355e6f5740 -1 osd.2 35 log_to_monitors true
2026-03-06T13:41:35.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:35 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2[119223]: 2026-03-06T12:41:35.508+0000 7f3555c9f640 -1 osd.2 35 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-06T13:41:35.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:35 vm03 ceph-mon[50411]: osd.1 [v2:192.168.123.103:6800/1783295990,v1:192.168.123.103:6801/1783295990] boot
2026-03-06T13:41:35.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:35 vm03 ceph-mon[50411]: osdmap e43: 3 total, 1 up, 3 in
2026-03-06T13:41:35.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:35 vm03 ceph-mon[50411]: from='osd.2 [v2:192.168.123.103:6808/1025570988,v1:192.168.123.103:6809/1025570988]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-06T13:41:35.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:35 vm03 ceph-mon[50411]: from='osd.2 [v2:192.168.123.103:6808/1025570988,v1:192.168.123.103:6809/1025570988]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-06T13:41:35.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:35 vm03 ceph-mon[50411]: osdmap e44: 3 total, 1 up, 3 in
2026-03-06T13:41:35.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:35 vm03 ceph-mon[50411]: from='osd.2 [v2:192.168.123.103:6808/1025570988,v1:192.168.123.103:6809/1025570988]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-06T13:41:35.610 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:35 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0[119920]: 2026-03-06T12:41:35.455+0000 7f6c0065c740 -1 Falling back to public interface
2026-03-06T13:41:36.360 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:35 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:35.929+0000 7f5b197a4100 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-06T13:41:36.360 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:36 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:36.157+0000 7f5b197a4100 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-06T13:41:36.859 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:36 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:36.369+0000 7f5b197a4100 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-06T13:41:36.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:36 vm03 ceph-mon[50411]: from='osd.0 [v2:192.168.123.103:6816/2623187824,v1:192.168.123.103:6817/2623187824]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-06T13:41:36.860 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:36 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0[119920]: 2026-03-06T12:41:36.449+0000 7f6c0065c740 -1 osd.0 37 log_to_monitors true
2026-03-06T13:41:36.860 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:36 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0[119920]: 2026-03-06T12:41:36.534+0000 7f6bf7c06640 -1 osd.0 37 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-06T13:41:37.359 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:36 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:36.895+0000 7f5b197a4100 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-06T13:41:37.360 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:37 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:37.079+0000 7f5b197a4100 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-06T13:41:37.859 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:37 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:37.554+0000 7f5b197a4100 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-06T13:41:37.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:37 vm03 ceph-mon[50411]: from='osd.0 [v2:192.168.123.103:6816/2623187824,v1:192.168.123.103:6817/2623187824]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-06T13:41:37.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:37 vm03 ceph-mon[50411]: osd.2 [v2:192.168.123.103:6808/1025570988,v1:192.168.123.103:6809/1025570988] boot
2026-03-06T13:41:37.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:37 vm03 ceph-mon[50411]: osdmap e45: 3 total, 2 up, 3 in
2026-03-06T13:41:37.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:37 vm03 ceph-mon[50411]: from='osd.0 [v2:192.168.123.103:6816/2623187824,v1:192.168.123.103:6817/2623187824]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-06T13:41:37.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:37 vm03 ceph-mon[50411]: Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-06T13:41:37.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:37 vm03 ceph-mon[50411]: osd.0 [v2:192.168.123.103:6816/2623187824,v1:192.168.123.103:6817/2623187824] boot
2026-03-06T13:41:37.860 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:37 vm03 ceph-mon[50411]: osdmap e46: 3 total, 3 up, 3 in
2026-03-06T13:41:38.359 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:38 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:38.091+0000 7f5b197a4100 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-06T13:41:38.860 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:38 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:38.660+0000 7f5b197a4100 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-06T13:41:39.087 INFO:tasks.workunit.client.0.vm03.stderr:+ for i in ${ISCSI_CONT_IDS}
2026-03-06T13:41:39.088 INFO:tasks.workunit.client.0.vm03.stderr:++ sudo podman exec 94685991a50d /bin/sh -c 'ps -ef | grep -c sleep'
2026-03-06T13:41:39.126 INFO:tasks.workunit.client.0.vm03.stderr:Error: no container with name or ID "94685991a50d" found: no such container
2026-03-06T13:41:39.137 DEBUG:teuthology.orchestra.run:got remote process result: 125
2026-03-06T13:41:39.138 INFO:tasks.workunit.client.0.vm03.stderr:+ SLEEP_COUNT=
2026-03-06T13:41:39.138 INFO:tasks.workunit:Stopping ['cephadm/test_iscsi_pids_limit.sh', 'cephadm/test_iscsi_etc_hosts.sh', 'cephadm/test_iscsi_setup.sh'] on client.0...
2026-03-06T13:41:39.138 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0
2026-03-06T13:41:39.221 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:38 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a[106682]: 2026-03-06T12:41:38.936+0000 7f5b197a4100 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-06T13:41:39.545 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:39 vm03 ceph-mon[50411]: osdmap e47: 3 total, 3 up, 3 in
2026-03-06T13:41:39.545 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:39 vm03 ceph-mon[50411]: Activating manager daemon a
2026-03-06T13:41:39.545 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:39 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/3308357498' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-06T13:41:39.545 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:39 vm03 ceph-mon[50411]: from='client.? 192.168.123.103:0/3233169353' entity='client.iscsi.foo.vm03.ncatkq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/4097104359"}]: dispatch
2026-03-06T13:41:39.613 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 105, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 83, in run_one_task
    return task(**kwargs)
  File "/home/teuthos/src/github.com_kshtsk_ceph_5726a36c3452e5b72190cfceba828abc62c819b7/qa/tasks/workunit.py", line 125, in task
    with parallel() as p:
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 84, in __exit__
    for result in self:
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 98, in __next__
    resurrect_traceback(result)
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 30, in resurrect_traceback
    raise exc.exc_info[1]
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 23, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthos/src/github.com_kshtsk_ceph_5726a36c3452e5b72190cfceba828abc62c819b7/qa/tasks/workunit.py", line 433, in _run_tests
    remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on vm03 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5726a36c3452e5b72190cfceba828abc62c819b7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh'
2026-03-06T13:41:39.614 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-06T13:41:39.616 INFO:tasks.cephadm:Teardown begin
2026-03-06T13:41:39.616 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-06T13:41:39.694 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-06T13:41:39.694 DEBUG:teuthology.orchestra.run.vm03:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-06T13:41:39.758 INFO:tasks.cephadm:Stopping all daemons...
2026-03-06T13:41:39.758 INFO:tasks.cephadm.mon.a:Stopping mon.a...
2026-03-06T13:41:39.758 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mon.a
2026-03-06T13:41:40.090 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:39 vm03 systemd[1]: Stopping Ceph mon.a for b4d7b36a-1958-11f1-a2a1-8fd8798eb057...
2026-03-06T13:41:40.090 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:40 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[50387]: 2026-03-06T12:41:40.010+0000 7f10b6558640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0
2026-03-06T13:41:40.091 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:40 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a[50387]: 2026-03-06T12:41:40.010+0000 7f10b6558640 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2026-03-06T13:41:40.349 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:40 vm03 podman[125113]: 2026-03-06 13:41:40.176249134 +0100 CET m=+0.210473206 container died 26481bcb51760faa6ca25a888a26d73dadb44a1f68997d41ab5521c2764f908a (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
2026-03-06T13:41:40.557 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mon.a.service'
2026-03-06T13:41:40.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:40 vm03 podman[125113]: 2026-03-06 13:41:40.347685748 +0100 CET m=+0.381909820 container remove 26481bcb51760faa6ca25a888a26d73dadb44a1f68997d41ab5521c2764f908a (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:41:40.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:40 vm03 bash[125113]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mon-a
2026-03-06T13:41:40.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:40 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mon.a.service: Deactivated successfully.
2026-03-06T13:41:40.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:40 vm03 systemd[1]: Stopped Ceph mon.a for b4d7b36a-1958-11f1-a2a1-8fd8798eb057.
2026-03-06T13:41:40.610 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 06 13:41:40 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mon.a.service: Consumed 20.773s CPU time.
2026-03-06T13:41:41.363 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-06T13:41:41.363 INFO:tasks.cephadm.mon.a:Stopped mon.a
2026-03-06T13:41:41.363 INFO:tasks.cephadm.mgr.a:Stopping mgr.a...
2026-03-06T13:41:41.364 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a
2026-03-06T13:41:41.610 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:41 vm03 systemd[1]: Stopping Ceph mgr.a for b4d7b36a-1958-11f1-a2a1-8fd8798eb057...
2026-03-06T13:41:41.879 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:41 vm03 podman[125430]: 2026-03-06 13:41:41.620915871 +0100 CET m=+0.091039433 container died ccb9fef23315587eb07d5c79e18dd5ab32be0f3a266c43cdd5f0a8833f8e8dfe (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:41:41.879 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:41 vm03 podman[125430]: 2026-03-06 13:41:41.78196296 +0100 CET m=+0.252086522 container remove ccb9fef23315587eb07d5c79e18dd5ab32be0f3a266c43cdd5f0a8833f8e8dfe (image=harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-3, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
2026-03-06T13:41:41.879 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:41 vm03 bash[125430]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-mgr-a
2026-03-06T13:41:41.904 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a.service'
2026-03-06T13:41:42.360 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:41 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a.service: Deactivated successfully.
2026-03-06T13:41:42.360 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:41 vm03 systemd[1]: Stopped Ceph mgr.a for b4d7b36a-1958-11f1-a2a1-8fd8798eb057.
2026-03-06T13:41:42.360 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 06 13:41:41 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@mgr.a.service: Consumed 26.509s CPU time.
2026-03-06T13:41:42.413 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-06T13:41:42.413 INFO:tasks.cephadm.mgr.a:Stopped mgr.a
2026-03-06T13:41:42.413 INFO:tasks.cephadm.osd.0:Stopping osd.0...
2026-03-06T13:41:42.413 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.0
2026-03-06T13:41:42.860 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:42 vm03 systemd[1]: Stopping Ceph osd.0 for b4d7b36a-1958-11f1-a2a1-8fd8798eb057...
2026-03-06T13:41:42.860 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:42 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0[119920]: 2026-03-06T12:41:42.602+0000 7f6bfd5f1640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-06T13:41:42.860 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:42 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0[119920]: 2026-03-06T12:41:42.602+0000 7f6bfd5f1640 -1 osd.0 48 *** Got signal Terminated ***
2026-03-06T13:41:42.860 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:42 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0[119920]: 2026-03-06T12:41:42.602+0000 7f6bfd5f1640 -1 osd.0 48 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-06T13:41:47.939 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:47 vm03 podman[125635]: 2026-03-06 13:41:47.642755756 +0100 CET m=+5.071601181 container died e64314a32f8a2862cc5326ad994debb85b77594e4b213a717f384ada41483cef (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552)
2026-03-06T13:41:47.940 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:47 vm03 podman[125635]: 2026-03-06 13:41:47.776990421 +0100 CET m=+5.205835846 container remove e64314a32f8a2862cc5326ad994debb85b77594e4b213a717f384ada41483cef (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8)
2026-03-06T13:41:47.940 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:47 vm03 bash[125635]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0
2026-03-06T13:41:48.360 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:47 vm03 podman[125716]: 2026-03-06 13:41:47.938220333 +0100 CET m=+0.015807066 container create d8efd408f2776875cafd0185e2207e69a2e6a284301e3ba6ed41231d4ac476e4 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9)
2026-03-06T13:41:48.360 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:47 vm03 podman[125716]: 2026-03-06 13:41:47.993683368 +0100 CET m=+0.071270101 container init d8efd408f2776875cafd0185e2207e69a2e6a284301e3ba6ed41231d4ac476e4 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-deactivate, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8)
2026-03-06T13:41:48.360 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:48 vm03 podman[125716]: 2026-03-06 13:41:48.000102994 +0100 CET m=+0.077689736 container start d8efd408f2776875cafd0185e2207e69a2e6a284301e3ba6ed41231d4ac476e4 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-deactivate, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8)
2026-03-06T13:41:48.360 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:48 vm03 podman[125716]: 2026-03-06 13:41:48.003624635 +0100 CET m=+0.081211378 container attach d8efd408f2776875cafd0185e2207e69a2e6a284301e3ba6ed41231d4ac476e4 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-deactivate, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8)
2026-03-06T13:41:48.360 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:48 vm03 podman[125716]: 2026-03-06 13:41:47.931762887 +0100 CET m=+0.009349630 image pull 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b
2026-03-06T13:41:48.587 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.0.service'
2026-03-06T13:41:48.860 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:48 vm03 podman[125716]: 2026-03-06 13:41:48.428734188 +0100 CET m=+0.506320921 container died d8efd408f2776875cafd0185e2207e69a2e6a284301e3ba6ed41231d4ac476e4 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-deactivate, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8)
2026-03-06T13:41:48.860 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:48 vm03 podman[125716]: 2026-03-06 13:41:48.564921539 +0100 CET m=+0.642508272 container remove d8efd408f2776875cafd0185e2207e69a2e6a284301e3ba6ed41231d4ac476e4 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-0-deactivate, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8)
2026-03-06T13:41:48.860 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:48 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.0.service: Deactivated successfully.
2026-03-06T13:41:48.861 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:48 vm03 systemd[1]: Stopped Ceph osd.0 for b4d7b36a-1958-11f1-a2a1-8fd8798eb057.
2026-03-06T13:41:48.861 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 06 13:41:48 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.0.service: Consumed 1.134s CPU time.
2026-03-06T13:41:49.061 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-06T13:41:49.061 INFO:tasks.cephadm.osd.0:Stopped osd.0
2026-03-06T13:41:49.061 INFO:tasks.cephadm.osd.1:Stopping osd.1...
2026-03-06T13:41:49.061 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.1
2026-03-06T13:41:49.360 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:49 vm03 systemd[1]: Stopping Ceph osd.1 for b4d7b36a-1958-11f1-a2a1-8fd8798eb057...
2026-03-06T13:41:49.360 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:49 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1[116512]: 2026-03-06T12:41:49.262+0000 7f4a00e50640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-06T13:41:49.360 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:49 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1[116512]: 2026-03-06T12:41:49.262+0000 7f4a00e50640 -1 osd.1 48 *** Got signal Terminated ***
2026-03-06T13:41:49.360 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:49 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1[116512]: 2026-03-06T12:41:49.262+0000 7f4a00e50640 -1 osd.1 48 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-06T13:41:54.610 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:54 vm03 podman[125836]: 2026-03-06 13:41:54.290240511 +0100 CET m=+5.065584461 container died a79630f11d1582b6ba8424b3539a194afb1355ff8a9aad3ffcdf2d37e0af504a (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552)
2026-03-06T13:41:54.610 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:54 vm03 podman[125836]: 2026-03-06 13:41:54.435093631 +0100 CET m=+5.210437571 container remove a79630f11d1582b6ba8424b3539a194afb1355ff8a9aad3ffcdf2d37e0af504a (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552)
2026-03-06T13:41:54.610 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:54 vm03 bash[125836]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1
2026-03-06T13:41:55.110 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:54 vm03 podman[125917]: 2026-03-06 13:41:54.628124251 +0100 CET m=+0.022372224 container create 31dae3d801fd3a2cc884f6c3729f54ae173c499fb305c1ff2570bba58c137ad6 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-deactivate, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2)
2026-03-06T13:41:55.110 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:54 vm03 podman[125917]: 2026-03-06 13:41:54.685097333 +0100 CET m=+0.079345306 container init 31dae3d801fd3a2cc884f6c3729f54ae173c499fb305c1ff2570bba58c137ad6 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-deactivate, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git)
2026-03-06T13:41:55.110 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:54 vm03 podman[125917]: 2026-03-06 13:41:54.702569184 +0100 CET m=+0.096817157 container start 31dae3d801fd3a2cc884f6c3729f54ae173c499fb305c1ff2570bba58c137ad6 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-deactivate, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552)
2026-03-06T13:41:55.110 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:54 vm03 podman[125917]: 2026-03-06 13:41:54.704236203 +0100 CET m=+0.098484186 container attach 31dae3d801fd3a2cc884f6c3729f54ae173c499fb305c1ff2570bba58c137ad6 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-deactivate, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
2026-03-06T13:41:55.110 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:54 vm03 podman[125917]: 2026-03-06 13:41:54.620271343 +0100 CET m=+0.014519316 image pull 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b
2026-03-06T13:41:55.268 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.1.service'
2026-03-06T13:41:55.610 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:55 vm03 podman[125917]: 2026-03-06 13:41:55.119852179 +0100 CET m=+0.514100152 container died 31dae3d801fd3a2cc884f6c3729f54ae173c499fb305c1ff2570bba58c137ad6 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-deactivate, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552)
2026-03-06T13:41:55.610 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:55 vm03 podman[125917]: 2026-03-06 13:41:55.251704465 +0100 CET m=+0.645952438 container remove 31dae3d801fd3a2cc884f6c3729f54ae173c499fb305c1ff2570bba58c137ad6 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-1-deactivate, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8)
2026-03-06T13:41:55.610 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:55 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.1.service: Deactivated successfully.
2026-03-06T13:41:55.610 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:55 vm03 systemd[1]: Stopped Ceph osd.1 for b4d7b36a-1958-11f1-a2a1-8fd8798eb057.
2026-03-06T13:41:55.610 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 06 13:41:55 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.1.service: Consumed 1.308s CPU time.
2026-03-06T13:41:55.739 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-06T13:41:55.739 INFO:tasks.cephadm.osd.1:Stopped osd.1
2026-03-06T13:41:55.739 INFO:tasks.cephadm.osd.2:Stopping osd.2...
2026-03-06T13:41:55.739 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.2
2026-03-06T13:41:56.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:55 vm03 systemd[1]: Stopping Ceph osd.2 for b4d7b36a-1958-11f1-a2a1-8fd8798eb057...
2026-03-06T13:41:56.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:55 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2[119223]: 2026-03-06T12:41:55.892+0000 7f355b68a640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-06T13:41:56.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:55 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2[119223]: 2026-03-06T12:41:55.892+0000 7f355b68a640 -1 osd.2 48 *** Got signal Terminated ***
2026-03-06T13:41:56.110 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:41:55 vm03 ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2[119223]: 2026-03-06T12:41:55.892+0000 7f355b68a640 -1 osd.2 48 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-06T13:42:01.211 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:42:00 vm03 podman[126037]: 2026-03-06 13:42:00.925849359 +0100 CET m=+5.051237357 container died 58316ab89fcac138d40e58c11ca22d78645ef8364df7214d24330d59109bcbb0 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2)
2026-03-06T13:42:01.211 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:42:01 vm03 podman[126037]: 2026-03-06 13:42:01.047423308 +0100 CET m=+5.172811306 container remove 58316ab89fcac138d40e58c11ca22d78645ef8364df7214d24330d59109bcbb0 (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
2026-03-06T13:42:01.211 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:42:01 vm03 bash[126037]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2
2026-03-06T13:42:01.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:42:01 vm03 podman[126115]: 2026-03-06 13:42:01.209793243 +0100 CET m=+0.015485203 container create a5b05dab81251a89fcf273c5beb0517f770d22abe8689ae640866cb66e02dd7a (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-deactivate, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:42:01.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:42:01 vm03 podman[126115]: 2026-03-06 13:42:01.258705237 +0100 CET m=+0.064397217 container init a5b05dab81251a89fcf273c5beb0517f770d22abe8689ae640866cb66e02dd7a (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-deactivate, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
2026-03-06T13:42:01.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:42:01 vm03 podman[126115]: 2026-03-06 13:42:01.267675376 +0100 CET m=+0.073367346 container start a5b05dab81251a89fcf273c5beb0517f770d22abe8689ae640866cb66e02dd7a (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9)
2026-03-06T13:42:01.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:42:01 vm03 podman[126115]: 2026-03-06 13:42:01.268838072 +0100 CET m=+0.074530042 container attach a5b05dab81251a89fcf273c5beb0517f770d22abe8689ae640866cb66e02dd7a (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-deactivate, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-06T13:42:01.610 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:42:01 vm03 podman[126115]: 2026-03-06 13:42:01.203069539 +0100 CET m=+0.008761519 image pull 306e97de47e91c2b4b24d3dc09be3b3a12039b078f343d91220102acc6628a68 harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b
2026-03-06T13:42:01.956 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.2.service'
2026-03-06T13:42:01.986 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:42:01 vm03 podman[126115]: 2026-03-06 13:42:01.711842639 +0100 CET m=+0.517534609 container died a5b05dab81251a89fcf273c5beb0517f770d22abe8689ae640866cb66e02dd7a (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-deactivate, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2)
2026-03-06T13:42:01.987 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:42:01 vm03 podman[126115]: 2026-03-06 13:42:01.927762048 +0100 CET m=+0.733454018 container remove a5b05dab81251a89fcf273c5beb0517f770d22abe8689ae640866cb66e02dd7a (image=harbor.clyso.com/custom-ceph/ceph/ceph@sha256:26363c7a4eea9ef5a0148afc7b2a22b6f486596d87a30c2a9fdcda5db3eca62b, name=ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057-osd-2-deactivate, CEPH_REF=19.2.3-47-gc24117fd552, CEPH_SHA1=c24117fd5525679b799527bc1bd1f1dd0a2db5e2, FROM_IMAGE=rockylinux:9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.8, CEPH_GIT_REPO=https://github.com/irq0/ceph.git)
2026-03-06T13:42:01.987 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:42:01 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.2.service: Deactivated successfully.
2026-03-06T13:42:01.987 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:42:01 vm03 systemd[1]: Stopped Ceph osd.2 for b4d7b36a-1958-11f1-a2a1-8fd8798eb057.
2026-03-06T13:42:01.987 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 06 13:42:01 vm03 systemd[1]: ceph-b4d7b36a-1958-11f1-a2a1-8fd8798eb057@osd.2.service: Consumed 1.194s CPU time.
2026-03-06T13:42:02.434 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-06T13:42:02.434 INFO:tasks.cephadm.osd.2:Stopped osd.2
2026-03-06T13:42:02.434 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 --force --keep-logs
2026-03-06T13:42:02.793 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: b4d7b36a-1958-11f1-a2a1-8fd8798eb057
2026-03-06T13:42:16.795 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-06T13:42:16.830 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-06T13:42:16.830 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/crash to /archive/irq0-2026-03-06_13:20:18-orch:cephadm:workunits-cobaltcore-storage-v19.2.3-fasttrack-3-none-default-vps/271/remote/vm03/crash
2026-03-06T13:42:16.830 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/crash -- .
2026-03-06T13:42:16.903 INFO:teuthology.orchestra.run.vm03.stderr:tar: /var/lib/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/crash: Cannot open: No such file or directory
2026-03-06T13:42:16.903 INFO:teuthology.orchestra.run.vm03.stderr:tar: Error is not recoverable: exiting now
2026-03-06T13:42:16.904 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-06T13:42:16.904 DEBUG:teuthology.orchestra.run.vm03:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v CEPHADM_FAILED_DAEMON | head -n 1
2026-03-06T13:42:16.977 INFO:tasks.cephadm:Compressing logs...
2026-03-06T13:42:16.977 DEBUG:teuthology.orchestra.run.vm03:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-06T13:42:17.047 INFO:teuthology.orchestra.run.vm03.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-06T13:42:17.047 INFO:teuthology.orchestra.run.vm03.stderr:‘/var/log/rbd-target-api’: No such file or directory
2026-03-06T13:42:17.049 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-mon.a.log
2026-03-06T13:42:17.049 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph.log
2026-03-06T13:42:17.051 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-mon.a.log: 90.2% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-06T13:42:17.051 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph.audit.log
2026-03-06T13:42:17.051 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph.log: 84.5% -- replaced with /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph.log.gz
2026-03-06T13:42:17.052 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-mgr.a.log
2026-03-06T13:42:17.053 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph.audit.log: 89.0% -- replaced with /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph.audit.log.gz
2026-03-06T13:42:17.053 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph.cephadm.log
2026-03-06T13:42:17.058 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-mgr.a.log: gzip -5 --verbose -- /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-volume.log
2026-03-06T13:42:17.062 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph.cephadm.log: 76.1% -- replaced with /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph.cephadm.log.gz
2026-03-06T13:42:17.064 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-osd.0.log
2026-03-06T13:42:17.076 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-osd.1.log
2026-03-06T13:42:17.086 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-osd.0.log: gzip -5 --verbose -- /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-osd.2.log
2026-03-06T13:42:17.090 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/tcmu-runner.log
2026-03-06T13:42:17.107 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-osd.2.log: /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/tcmu-runner.log: 63.7% -- replaced with /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/tcmu-runner.log.gz
2026-03-06T13:42:17.121 INFO:teuthology.orchestra.run.vm03.stderr: 89.1% 93.3% -- replaced with /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-volume.log.gz
2026-03-06T13:42:17.121 INFO:teuthology.orchestra.run.vm03.stderr: -- replaced with /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-mgr.a.log.gz
2026-03-06T13:42:17.190 INFO:teuthology.orchestra.run.vm03.stderr: 95.0% -- replaced with /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-osd.0.log.gz
2026-03-06T13:42:17.190 INFO:teuthology.orchestra.run.vm03.stderr: 91.3% -- replaced with /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-mon.a.log.gz
2026-03-06T13:42:17.243 INFO:teuthology.orchestra.run.vm03.stderr: 94.9% -- replaced with /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-osd.2.log.gz
2026-03-06T13:42:17.272 INFO:teuthology.orchestra.run.vm03.stderr: 95.1% -- replaced with /var/log/ceph/b4d7b36a-1958-11f1-a2a1-8fd8798eb057/ceph-osd.1.log.gz
2026-03-06T13:42:17.274 INFO:teuthology.orchestra.run.vm03.stderr:
2026-03-06T13:42:17.274 INFO:teuthology.orchestra.run.vm03.stderr:real 0m0.238s
2026-03-06T13:42:17.274 INFO:teuthology.orchestra.run.vm03.stderr:user 0m0.389s
2026-03-06T13:42:17.274 INFO:teuthology.orchestra.run.vm03.stderr:sys 0m0.045s
2026-03-06T13:42:17.274 INFO:tasks.cephadm:Archiving logs...
2026-03-06T13:42:17.275 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/log/ceph to /archive/irq0-2026-03-06_13:20:18-orch:cephadm:workunits-cobaltcore-storage-v19.2.3-fasttrack-3-none-default-vps/271/remote/vm03/log
2026-03-06T13:42:17.275 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-06T13:42:17.371 INFO:tasks.cephadm:Removing cluster...
2026-03-06T13:42:17.371 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid b4d7b36a-1958-11f1-a2a1-8fd8798eb057 --force
2026-03-06T13:42:17.717 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: b4d7b36a-1958-11f1-a2a1-8fd8798eb057
2026-03-06T13:42:17.966 INFO:tasks.cephadm:Removing cephadm ...
2026-03-06T13:42:17.966 DEBUG:teuthology.orchestra.run.vm03:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-06T13:42:17.984 INFO:tasks.cephadm:Teardown complete
2026-03-06T13:42:17.984 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-06T13:42:17.986 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-03-06T13:42:17.986 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-06T13:42:18.060 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-06T13:42:18.060 DEBUG:teuthology.orchestra.run.vm03:>
2026-03-06T13:42:18.060 DEBUG:teuthology.orchestra.run.vm03:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-06T13:42:18.060 DEBUG:teuthology.orchestra.run.vm03:> sudo yum -y remove $d || true
2026-03-06T13:42:18.060 DEBUG:teuthology.orchestra.run.vm03:> done
2026-03-06T13:42:18.416 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:18.417 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:18.417 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repo Size
2026-03-06T13:42:18.417 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:18.417 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-06T13:42:18.417 INFO:teuthology.orchestra.run.vm03.stdout: ceph-radosgw x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 39 M
2026-03-06T13:42:18.417 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-06T13:42:18.417 INFO:teuthology.orchestra.run.vm03.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k
2026-03-06T13:42:18.417 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:18.417 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-06T13:42:18.417 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:18.417 INFO:teuthology.orchestra.run.vm03.stdout:Remove 2 Packages
2026-03-06T13:42:18.417 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:18.417 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 39 M
2026-03-06T13:42:18.417 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-06T13:42:18.420 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-06T13:42:18.420 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-06T13:42:18.450 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-06T13:42:18.450 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-06T13:42:18.484 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-06T13:42:18.506 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 1/2
2026-03-06T13:42:18.506 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:42:18.506 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-06T13:42:18.506 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-06T13:42:18.506 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-06T13:42:18.506 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:18.508 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-radosgw-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 1/2
2026-03-06T13:42:18.516 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 1/2
2026-03-06T13:42:18.531 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-06T13:42:18.602 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-06T13:42:18.602 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-radosgw-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 1/2
2026-03-06T13:42:18.649 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-06T13:42:18.649 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:18.649 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-06T13:42:18.649 INFO:teuthology.orchestra.run.vm03.stdout: ceph-radosgw-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:18.649 INFO:teuthology.orchestra.run.vm03.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-06T13:42:18.649 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:18.649 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:18.877 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:18.877 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:18.877 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-06T13:42:18.877 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:18.877 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-06T13:42:18.877 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 210 M
2026-03-06T13:42:18.877 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-06T13:42:18.878 INFO:teuthology.orchestra.run.vm03.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-06T13:42:18.878 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-06T13:42:18.878 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:18.878 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-06T13:42:18.878 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:18.878 INFO:teuthology.orchestra.run.vm03.stdout:Remove 3 Packages
2026-03-06T13:42:18.878 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:18.878 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 211 M
2026-03-06T13:42:18.878 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-06T13:42:18.881 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-06T13:42:18.881 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-06T13:42:18.914 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-06T13:42:18.914 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-06T13:42:18.971 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-06T13:42:18.978 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-test-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 1/3
2026-03-06T13:42:18.981 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 2/3
2026-03-06T13:42:18.996 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 3/3
2026-03-06T13:42:19.063 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: xmlstarlet-1.6.1-20.el9.x86_64 3/3
2026-03-06T13:42:19.063 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-test-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 1/3
2026-03-06T13:42:19.063 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 2/3
2026-03-06T13:42:19.117 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 3/3
2026-03-06T13:42:19.117 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:19.117 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-06T13:42:19.118 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:19.118 INFO:teuthology.orchestra.run.vm03.stdout: socat-1.7.4.1-8.el9.x86_64
2026-03-06T13:42:19.118 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-03-06T13:42:19.118 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:19.118 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:19.338 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout: ceph x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 0
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 7.4 M
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 18 M
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout:Remove 8 Packages
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 28 M
2026-03-06T13:42:19.339 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-06T13:42:19.342 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-06T13:42:19.342 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-06T13:42:19.376 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-06T13:42:19.376 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-06T13:42:19.421 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-06T13:42:19.428 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 1/8
2026-03-06T13:42:19.431 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-06T13:42:19.434 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-06T13:42:19.437 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-06T13:42:19.440 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-06T13:42:19.442 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-06T13:42:19.471 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mds-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 7/8
2026-03-06T13:42:19.472 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:42:19.472 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-06T13:42:19.472 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-06T13:42:19.472 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-06T13:42:19.472 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:19.472 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mds-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 7/8
2026-03-06T13:42:19.481 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mds-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 7/8
2026-03-06T13:42:19.504 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mon-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 8/8
2026-03-06T13:42:19.504 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:42:19.504 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-06T13:42:19.504 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-06T13:42:19.504 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-06T13:42:19.504 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:19.505 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mon-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 8/8
2026-03-06T13:42:19.613 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mon-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 8/8
2026-03-06T13:42:19.613 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 1/8
2026-03-06T13:42:19.614 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mds-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2/8
2026-03-06T13:42:19.614 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mon-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 3/8
2026-03-06T13:42:19.614 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-06T13:42:19.614 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-06T13:42:19.614 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-06T13:42:19.614 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-06T13:42:19.673 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-06T13:42:19.673 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:19.673 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-06T13:42:19.673 INFO:teuthology.orchestra.run.vm03.stdout: ceph-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:19.673 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:19.673 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:19.673 INFO:teuthology.orchestra.run.vm03.stdout: lua-5.4.4-4.el9.x86_64
2026-03-06T13:42:19.673 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-06T13:42:19.673 INFO:teuthology.orchestra.run.vm03.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-06T13:42:19.673 INFO:teuthology.orchestra.run.vm03.stdout: unzip-6.0-59.el9.x86_64
2026-03-06T13:42:19.673 INFO:teuthology.orchestra.run.vm03.stdout: zip-3.0-35.el9.x86_64
2026-03-06T13:42:19.673 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:19.673 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:19.886 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout:===================================================================================================
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout:===================================================================================================
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 23 M
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages:
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 431 k
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 3.4 M
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm noarch 2:19.2.3-47.gc24117fd552.el9.clyso @ceph-noarch 803 k
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard noarch 2:19.2.3-47.gc24117fd552.el9.clyso @ceph-noarch 88 M
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-47.gc24117fd552.el9.clyso @ceph-noarch 66 M
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook noarch 2:19.2.3-47.gc24117fd552.el9.clyso @ceph-noarch 563 k
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout: ceph-osd x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 59 M
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume noarch 2:19.2.3-47.gc24117fd552.el9.clyso @ceph-noarch 1.4 M
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 13 M
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 85 M
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards noarch 2:19.2.3-47.gc24117fd552.el9.clyso @ceph-noarch 626 k
2026-03-06T13:42:19.895 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents noarch 2:19.2.3-47.gc24117fd552.el9.clyso @ceph-noarch 60 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core noarch 2:19.2.3-47.gc24117fd552.el9.clyso @ceph-noarch 1.5 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts noarch 2:19.2.3-47.gc24117fd552.el9.clyso @ceph-noarch 51 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: ceph-selinux x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 138 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: libcephsqlite x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 425 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 1.6 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 702 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-influxdb noarch 5.3.1-1.el9 @epel 747 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-isodate noarch 0.6.1-3.el9 @epel 203 k
2026-03-06T13:42:19.896 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-lxml x86_64 4.6.5-3.el9 @appstream 4.2 M
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-msgpack x86_64 1.0.3-2.el9 @epel 264 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-saml noarch 1.16.0-1.el9 @epel 730 k
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-06T13:42:19.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmlsec x86_64 1.3.13-1.el9 @epel 158 k
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools x86_64 1:7.2-10.el9 @baseos 1.9 M
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1 x86_64 1.2.29-13.el9 @appstream 596 k
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1-openssl x86_64 1.2.29-13.el9 @appstream 281 k
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout:===================================================================================================
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout:Remove 113 Packages
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 623 M
2026-03-06T13:42:19.898 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-06T13:42:19.931 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-06T13:42:19.931 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-06T13:42:20.076 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-06T13:42:20.076 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-06T13:42:20.252 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-06T13:42:20.253 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-rook-2:19.2.3-47.gc24117fd552.el9.clyso 1/113
2026-03-06T13:42:20.262 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-47.gc24117fd552.el9.clyso 1/113
2026-03-06T13:42:20.282 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 2/113
2026-03-06T13:42:20.282 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:42:20.282 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-06T13:42:20.282 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-06T13:42:20.282 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-06T13:42:20.282 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:20.283 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 2/113
2026-03-06T13:42:20.299 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 2/113
2026-03-06T13:42:20.321 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-47.gc24117fd552.e 3/113
2026-03-06T13:42:20.321 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-47.gc24117fd552.el9. 4/113
2026-03-06T13:42:20.336 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-47.gc24117fd552.el9. 4/113
2026-03-06T13:42:20.342 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-influxdb-5.3.1-1.el9.noarch 5/113
2026-03-06T13:42:20.342 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-47.gc24117fd552.el9.cl 6/113
2026-03-06T13:42:20.359 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-47.gc24117fd552.el9.cl 6/113
2026-03-06T13:42:20.369 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 7/113
2026-03-06T13:42:20.374 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 8/113
2026-03-06T13:42:20.385 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 9/113
2026-03-06T13:42:20.391 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 10/113
2026-03-06T13:42:20.415 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-osd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 11/113
2026-03-06T13:42:20.415 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:42:20.415 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-06T13:42:20.415 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-06T13:42:20.415 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-06T13:42:20.415 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:20.416 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-osd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 11/113
2026-03-06T13:42:20.430 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-osd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 11/113
2026-03-06T13:42:20.449 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-volume-2:19.2.3-47.gc24117fd552.el9.clyso.n 12/113
2026-03-06T13:42:20.449 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:42:20.449 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-06T13:42:20.449 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:20.458 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-volume-2:19.2.3-47.gc24117fd552.el9.clyso.n 12/113
2026-03-06T13:42:20.471 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-volume-2:19.2.3-47.gc24117fd552.el9.clyso.n 12/113
2026-03-06T13:42:20.474 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 13/113
2026-03-06T13:42:20.480 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 14/113
2026-03-06T13:42:20.487 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 15/113
2026-03-06T13:42:20.500 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-saml-1.16.0-1.el9.noarch 16/113
2026-03-06T13:42:20.545 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 17/113
2026-03-06T13:42:20.552 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 18/113
2026-03-06T13:42:20.556 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 19/113
2026-03-06T13:42:20.566 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 20/113
2026-03-06T13:42:20.573 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 21/113
2026-03-06T13:42:20.573 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-47.gc2411 22/113
2026-03-06T13:42:20.583 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-47.gc2411 22/113
2026-03-06T13:42:20.695 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 23/113
2026-03-06T13:42:20.711 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 24/113
2026-03-06T13:42:20.719 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-xmlsec-1.3.13-1.el9.x86_64 25/113
2026-03-06T13:42:20.724 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-lxml-4.6.5-3.el9.x86_64 26/113
2026-03-06T13:42:20.739 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 27/113
2026-03-06T13:42:20.740 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-06T13:42:20.740 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:20.741 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 27/113
2026-03-06T13:42:20.778 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 27/113
2026-03-06T13:42:20.783 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 28/113
2026-03-06T13:42:20.786 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : xmlsec1-openssl-1.2.29-13.el9.x86_64 29/113
2026-03-06T13:42:20.800 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : xmlsec1-1.2.29-13.el9.x86_64 30/113
2026-03-06T13:42:20.806 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 31/113
2026-03-06T13:42:20.809 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 32/113
2026-03-06T13:42:20.812 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 33/113
2026-03-06T13:42:20.833 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: rbd-mirror-2:19.2.3-47.gc24117fd552.el9.clyso.x8 34/113
2026-03-06T13:42:20.833 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:42:20.833 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-06T13:42:20.833 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-06T13:42:20.833 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-06T13:42:20.833 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:20.833 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : rbd-mirror-2:19.2.3-47.gc24117fd552.el9.clyso.x8 34/113
2026-03-06T13:42:20.847 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: rbd-mirror-2:19.2.3-47.gc24117fd552.el9.clyso.x8 34/113
2026-03-06T13:42:20.851 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 35/113
2026-03-06T13:42:20.853 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 36/113
2026-03-06T13:42:20.856 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 37/113
2026-03-06T13:42:20.859 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 38/113
2026-03-06T13:42:20.862 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 39/113
2026-03-06T13:42:20.865 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 40/113
2026-03-06T13:42:20.865 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-k8sevents-2:19.2.3-47.gc24117fd552.el9. 41/113
2026-03-06T13:42:20.927 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-k8sevents-2:19.2.3-47.gc24117fd552.el9. 41/113
2026-03-06T13:42:20.937 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 42/113
2026-03-06T13:42:20.942 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 43/113
2026-03-06T13:42:20.952 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 44/113
2026-03-06T13:42:20.957 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 45/113
2026-03-06T13:42:20.969 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 46/113
2026-03-06T13:42:20.976 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 47/113
2026-03-06T13:42:20.981 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 48/113
2026-03-06T13:42:20.987 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 49/113
2026-03-06T13:42:21.040 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 50/113
2026-03-06T13:42:21.051 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 51/113
2026-03-06T13:42:21.054 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 52/113
2026-03-06T13:42:21.055 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 53/113
2026-03-06T13:42:21.058 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 54/113
2026-03-06T13:42:21.061 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 55/113
2026-03-06T13:42:21.064 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 56/113
2026-03-06T13:42:21.085 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-47.gc24117f 57/113
2026-03-06T13:42:21.085 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-06T13:42:21.085 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-06T13:42:21.085 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:21.086 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-47.gc24117f 57/113
2026-03-06T13:42:21.096 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-47.gc24117f 57/113
2026-03-06T13:42:21.098 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 58/113
2026-03-06T13:42:21.101 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 59/113
2026-03-06T13:42:21.104 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-ply-3.11-14.el9.noarch 60/113
2026-03-06T13:42:21.108 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 61/113
2026-03-06T13:42:21.115 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 62/113
2026-03-06T13:42:21.120 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 63/113
2026-03-06T13:42:21.128 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 64/113
2026-03-06T13:42:21.138 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 65/113
2026-03-06T13:42:21.144 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 66/113
2026-03-06T13:42:21.148 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 67/113
2026-03-06T13:42:21.151 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 68/113
2026-03-06T13:42:21.154 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 69/113
2026-03-06T13:42:21.156 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 70/113
2026-03-06T13:42:21.159 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 71/113
2026-03-06T13:42:21.162 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 72/113
2026-03-06T13:42:21.166 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 73/113
2026-03-06T13:42:21.175 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 74/113
2026-03-06T13:42:21.180 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 75/113
2026-03-06T13:42:21.183 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 76/113
2026-03-06T13:42:21.186 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 77/113
2026-03-06T13:42:21.192 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 78/113
2026-03-06T13:42:21.198 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 79/113
2026-03-06T13:42:21.202 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-isodate-0.6.1-3.el9.noarch 80/113
2026-03-06T13:42:21.206 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 81/113
2026-03-06T13:42:21.209 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 82/113
2026-03-06T13:42:21.215 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 83/113
2026-03-06T13:42:21.220 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 84/113
2026-03-06T13:42:21.223 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 85/113
2026-03-06T13:42:21.227 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 86/113
2026-03-06T13:42:21.229 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-47.gc24117fd552 87/113
2026-03-06T13:42:21.236 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-47.gc24117fd552. 88/113
2026-03-06T13:42:21.240 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 89/113
2026-03-06T13:42:21.263 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-base-2:19.2.3-47.gc24117fd552.el9.clyso.x86 90/113
2026-03-06T13:42:21.263 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-06T13:42:21.263 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:21.272 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-base-2:19.2.3-47.gc24117fd552.el9.clyso.x86 90/113
2026-03-06T13:42:21.301 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-base-2:19.2.3-47.gc24117fd552.el9.clyso.x86 90/113
2026-03-06T13:42:21.301 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-common-2:19.2.3-47.gc24117fd552.el9.clyso.x 91/113
2026-03-06T13:42:21.316 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-common-2:19.2.3-47.gc24117fd552.el9.clyso.x 91/113
2026-03-06T13:42:21.321 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 92/113
2026-03-06T13:42:21.324 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-ceph-common-2:19.2.3-47.gc24117fd552.el9 93/113
2026-03-06T13:42:21.326 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 94/113
2026-03-06T13:42:21.326 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-selinux-2:19.2.3-47.gc24117fd552.el9.clyso. 95/113
2026-03-06T13:42:27.410 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-selinux-2:19.2.3-47.gc24117fd552.el9.clyso. 95/113
2026-03-06T13:42:27.410 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /sys
2026-03-06T13:42:27.410 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /proc
2026-03-06T13:42:27.410 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /mnt
2026-03-06T13:42:27.410 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /var/tmp
2026-03-06T13:42:27.410 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /home
2026-03-06T13:42:27.410 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /root
2026-03-06T13:42:27.410 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /tmp
2026-03-06T13:42:27.410 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:27.420 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 96/113
2026-03-06T13:42:27.437 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 97/113
2026-03-06T13:42:27.437 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 97/113
2026-03-06T13:42:27.444 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 97/113
2026-03-06T13:42:27.447 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 98/113
2026-03-06T13:42:27.449 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 99/113
2026-03-06T13:42:27.451 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 100/113
2026-03-06T13:42:27.453 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 101/113
2026-03-06T13:42:27.453 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libradosstriper1-2:19.2.3-47.gc24117fd552.el9.cl 102/113
2026-03-06T13:42:27.465 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libradosstriper1-2:19.2.3-47.gc24117fd552.el9.cl 102/113
2026-03-06T13:42:27.476 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: smartmontools-1:7.2-10.el9.x86_64 103/113
2026-03-06T13:42:27.476 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/smartd.service".
2026-03-06T13:42:27.476 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:27.478 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : smartmontools-1:7.2-10.el9.x86_64 103/113
2026-03-06T13:42:27.485 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: smartmontools-1:7.2-10.el9.x86_64 103/113
2026-03-06T13:42:27.488 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 104/113
2026-03-06T13:42:27.490 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 105/113
2026-03-06T13:42:27.493 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 106/113
2026-03-06T13:42:27.495 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 107/113
2026-03-06T13:42:27.498 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 108/113
2026-03-06T13:42:27.504 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 109/113
2026-03-06T13:42:27.512 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 110/113
2026-03-06T13:42:27.517 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 111/113
2026-03-06T13:42:27.520 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-msgpack-1.0.3-2.el9.x86_64 112/113
2026-03-06T13:42:27.520 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libcephsqlite-2:19.2.3-47.gc24117fd552.el9.clyso 113/113
2026-03-06T13:42:27.623 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephsqlite-2:19.2.3-47.gc24117fd552.el9.clyso 113/113
2026-03-06T13:42:27.623 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/113
2026-03-06T13:42:27.623 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-base-2:19.2.3-47.gc24117fd552.el9.clyso.x86 2/113
2026-03-06T13:42:27.623 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-common-2:19.2.3-47.gc24117fd552.el9.clyso.x 3/113
2026-03-06T13:42:27.623 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-47.gc24117fd552 4/113
2026-03-06T13:42:27.623 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-47.gc24117f 5/113
2026-03-06T13:42:27.623 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 6/113
2026-03-06T13:42:27.623 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-47.gc24117fd552.el9.cl 7/113
2026-03-06T13:42:27.623 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-47.gc24117fd552.el9. 8/113
2026-03-06T13:42:27.623 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-47.gc2411 9/113
2026-03-06T13:42:27.623 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-k8sevents-2:19.2.3-47.gc24117fd552.el9. 10/113
2026-03-06T13:42:27.623 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-47.gc24117fd552.e 11/113
2026-03-06T13:42:27.623 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-rook-2:19.2.3-47.gc24117fd552.el9.clyso 12/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-osd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_ 13/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-47.gc24117fd552. 14/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-selinux-2:19.2.3-47.gc24117fd552.el9.clyso. 15/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-volume-2:19.2.3-47.gc24117fd552.el9.clyso.n 16/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 17/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 18/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 19/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 20/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 21/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 22/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 23/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephsqlite-2:19.2.3-47.gc24117fd552.el9.clyso 24/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 25/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 26/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 27/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 28/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libradosstriper1-2:19.2.3-47.gc24117fd552.el9.cl 29/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 30/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 31/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 32/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 33/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 34/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 35/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 36/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 37/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 38/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 39/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 40/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 41/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 42/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 43/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ceph-common-2:19.2.3-47.gc24117fd552.el9 44/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 45/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 46/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 47/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 48/113
2026-03-06T13:42:27.624 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 49/113
2026-03-06T13:42:27.625 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 50/113
2026-03-06T13:42:27.625 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 51/113
2026-03-06T13:42:27.625 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 52/113
2026-03-06T13:42:27.625 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 53/113
2026-03-06T13:42:27.625 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 54/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 55/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-influxdb-5.3.1-1.el9.noarch 56/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-isodate-0.6.1-3.el9.noarch 57/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 58/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 59/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 60/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 61/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 62/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 63/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 64/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 65/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 66/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 67/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 68/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 69/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-lxml-4.6.5-3.el9.x86_64 70/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 71/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 72/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 73/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-msgpack-1.0.3-2.el9.x86_64 74/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 75/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 76/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 77/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 78/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 79/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 80/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ply-3.11-14.el9.noarch 81/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 82/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 83/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 84/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 85/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 86/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 87/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 88/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 89/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 90/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 91/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 92/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 93/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 94/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 95/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-saml-1.16.0-1.el9.noarch 96/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 97/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 98/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 99/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 100/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 101/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 102/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 103/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 104/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-xmlsec-1.3.13-1.el9.x86_64 105/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 106/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 107/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 108/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 109/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-mirror-2:19.2.3-47.gc24117fd552.el9.clyso.x8 110/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : smartmontools-1:7.2-10.el9.x86_64 111/113
2026-03-06T13:42:27.626 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlsec1-1.2.29-13.el9.x86_64 112/113
2026-03-06T13:42:27.737 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlsec1-openssl-1.2.29-13.el9.x86_64 113/113
2026-03-06T13:42:27.737 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards-2:19.2.3-47.gc24117fd552.el9.clyso.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm-2:19.2.3-47.gc24117fd552.el9.clyso.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard-2:19.2.3-47.gc24117fd552.el9.clyso.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local-2:19.2.3-47.gc24117fd552.el9.clyso.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents-2:19.2.3-47.gc24117fd552.el9.clyso.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core-2:19.2.3-47.gc24117fd552.el9.clyso.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook-2:19.2.3-47.gc24117fd552.el9.clyso.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ceph-osd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts-2:19.2.3-47.gc24117fd552.el9.clyso.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ceph-selinux-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume-2:19.2.3-47.gc24117fd552.el9.clyso.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: libcephsqlite-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-chardet-4.0.0-5.el9.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-idna-2.10-7.el9.1.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-influxdb-5.3.1-1.el9.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-isodate-0.6.1-3.el9.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-06T13:42:27.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpatch-1.21-16.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpointer-2.0-4.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-lxml-4.6.5-3.el9.x86_64
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-msgpack-1.0.3-2.el9.x86_64
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-oauthlib-3.1.1-5.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply-3.11-14.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable-0.7.2-27.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-pysocks-1.7.1-12.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-pytz-2021.1-5.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-saml-1.16.0-1.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmlsec-1.3.13-1.el9.x86_64
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools-1:7.2-10.el9.x86_64
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1-1.2.29-13.el9.x86_64
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1-openssl-1.2.29-13.el9.x86_64
2026-03-06T13:42:27.739 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:27.740 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:27.981 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:27.981 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:27.981 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-06T13:42:27.981 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:27.981 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-06T13:42:27.981 INFO:teuthology.orchestra.run.vm03.stdout: cephadm noarch 2:19.2.3-47.gc24117fd552.el9.clyso @ceph-noarch 775 k
2026-03-06T13:42:27.981 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:27.981 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-06T13:42:27.981 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:27.981 INFO:teuthology.orchestra.run.vm03.stdout:Remove 1 Package
2026-03-06T13:42:27.981 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:27.981 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 775 k
2026-03-06T13:42:27.981 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-06T13:42:27.983 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-06T13:42:27.983 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-06T13:42:27.985 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-06T13:42:27.985 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-06T13:42:28.003 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-06T13:42:28.004 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : cephadm-2:19.2.3-47.gc24117fd552.el9.clyso.noarch 1/1
2026-03-06T13:42:28.128 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: cephadm-2:19.2.3-47.gc24117fd552.el9.clyso.noarch 1/1
2026-03-06T13:42:28.170 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cephadm-2:19.2.3-47.gc24117fd552.el9.clyso.noarch 1/1
2026-03-06T13:42:28.170 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:28.170 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-06T13:42:28.170 INFO:teuthology.orchestra.run.vm03.stdout: cephadm-2:19.2.3-47.gc24117fd552.el9.clyso.noarch
2026-03-06T13:42:28.170 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:28.170 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:28.346 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-immutable-object-cache
2026-03-06T13:42:28.346 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:28.348 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:28.348 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:28.348 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:28.530 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr
2026-03-06T13:42:28.530 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:28.533 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:28.533 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:28.533 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:28.729 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-dashboard
2026-03-06T13:42:28.729 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:28.732 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:28.733 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:28.733 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:28.943 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-06T13:42:28.944 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:28.946 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:28.947 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:28.947 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:29.145 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-rook
2026-03-06T13:42:29.146 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:29.148 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:29.148 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:29.148 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:29.336 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-cephadm
2026-03-06T13:42:29.336 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:29.338 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:29.339 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:29.339 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:29.536 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:29.537 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:29.537 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-06T13:42:29.537 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:29.537 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-06T13:42:29.537 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 3.6 M
2026-03-06T13:42:29.537 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-06T13:42:29.537 INFO:teuthology.orchestra.run.vm03.stdout: fuse x86_64 2.9.9-17.el9 @baseos 214 k
2026-03-06T13:42:29.537 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:29.537 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-06T13:42:29.537 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:29.537 INFO:teuthology.orchestra.run.vm03.stdout:Remove 2 Packages
2026-03-06T13:42:29.537 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:29.537 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 3.8 M
2026-03-06T13:42:29.537 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-06T13:42:29.539 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-06T13:42:29.539 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-06T13:42:29.554 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-06T13:42:29.555 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-06T13:42:29.584 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-06T13:42:29.588 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-fuse-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 1/2
2026-03-06T13:42:29.602 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : fuse-2.9.9-17.el9.x86_64 2/2
2026-03-06T13:42:29.683 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: fuse-2.9.9-17.el9.x86_64 2/2
2026-03-06T13:42:29.683 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-fuse-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 1/2
2026-03-06T13:42:29.731 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 2/2
2026-03-06T13:42:29.731 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:29.732 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-06T13:42:29.732 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 fuse-2.9.9-17.el9.x86_64
2026-03-06T13:42:29.732 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:29.732 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:29.966 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-volume
2026-03-06T13:42:29.966 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:29.966 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:29.967 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:29.967 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:30.208 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:30.208 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:30.208 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repo Size
2026-03-06T13:42:30.208 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:30.208 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-06T13:42:30.208 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 456 k
2026-03-06T13:42:30.208 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages:
2026-03-06T13:42:30.208 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 153 k
2026-03-06T13:42:30.208 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:30.208 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-06T13:42:30.208 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:30.208 INFO:teuthology.orchestra.run.vm03.stdout:Remove 2 Packages
2026-03-06T13:42:30.208 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:30.208 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 610 k
2026-03-06T13:42:30.208 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-06T13:42:30.210 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-06T13:42:30.210 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-06T13:42:30.221 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-06T13:42:30.221 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-06T13:42:30.249 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-06T13:42:30.251 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libcephfs-devel-2:19.2.3-47.gc24117fd552.el9.clyso.x 1/2
2026-03-06T13:42:30.267 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : librados-devel-2:19.2.3-47.gc24117fd552.el9.clyso.x8 2/2
2026-03-06T13:42:30.357 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librados-devel-2:19.2.3-47.gc24117fd552.el9.clyso.x8 2/2
2026-03-06T13:42:30.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs-devel-2:19.2.3-47.gc24117fd552.el9.clyso.x 1/2
2026-03-06T13:42:30.412 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados-devel-2:19.2.3-47.gc24117fd552.el9.clyso.x8 2/2
2026-03-06T13:42:30.412 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:30.412 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-06T13:42:30.412 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:30.412 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:30.412 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:30.412 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:30.634 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repo Size
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2 x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 3.0 M
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages:
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 514 k
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 187 k
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout:Remove 3 Packages
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 3.7 M
2026-03-06T13:42:30.635 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-06T13:42:30.638 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-06T13:42:30.638 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-06T13:42:30.657 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-06T13:42:30.657 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-06T13:42:30.700 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-06T13:42:30.703 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cephfs-2:19.2.3-47.gc24117fd552.el9.clyso.x8 1/3
2026-03-06T13:42:30.704 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-ceph-argparse-2:19.2.3-47.gc24117fd552.el9.c 2/3
2026-03-06T13:42:30.705 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libcephfs2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 3/3
2026-03-06T13:42:30.779 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephfs2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 3/3
2026-03-06T13:42:30.779 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 1/3
2026-03-06T13:42:30.779 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ceph-argparse-2:19.2.3-47.gc24117fd552.el9.c 2/3
2026-03-06T13:42:30.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cephfs-2:19.2.3-47.gc24117fd552.el9.clyso.x8 3/3
2026-03-06T13:42:30.818 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:30.818 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-06T13:42:30.818 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:30.818 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:30.818 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:30.818 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:30.818 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:31.028 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: libcephfs-devel
2026-03-06T13:42:31.028 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:31.031 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:31.031 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:31.031 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:31.249 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout: librados2 x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 13 M
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages:
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout: python3-rados x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 1.1 M
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 1.1 M
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout: python3-rgw x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 265 k
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout: rbd-fuse x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 227 k
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout: rbd-nbd x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 494 k
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout: boost-program-options
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout: x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-06T13:42:31.250 INFO:teuthology.orchestra.run.vm03.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout: librbd1 x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 13 M
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout: librgw2 x86_64 2:19.2.3-47.gc24117fd552.el9.clyso @ceph 19 M
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout:Remove 20 Packages
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 79 M
2026-03-06T13:42:31.251 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-06T13:42:31.255 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-06T13:42:31.255 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-06T13:42:31.285 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-06T13:42:31.285 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-06T13:42:31.334 INFO:teuthology.orchestra.run.vm03.stdout:  Preparing : 1/1
2026-03-06T13:42:31.337 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : rbd-nbd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 1/20
2026-03-06T13:42:31.340 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : rbd-fuse-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 2/20
2026-03-06T13:42:31.342 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : python3-rgw-2:19.2.3-47.gc24117fd552.el9.clyso.x86 3/20
2026-03-06T13:42:31.342 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : librgw2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 4/20
2026-03-06T13:42:31.357 INFO:teuthology.orchestra.run.vm03.stdout:  Running scriptlet: librgw2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 4/20
2026-03-06T13:42:31.359 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-06T13:42:31.361 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : python3-rbd-2:19.2.3-47.gc24117fd552.el9.clyso.x86 6/20
2026-03-06T13:42:31.363 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : python3-rados-2:19.2.3-47.gc24117fd552.el9.clyso.x 7/20
2026-03-06T13:42:31.366 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-06T13:42:31.368 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-06T13:42:31.368 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : librbd1-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 10/20
2026-03-06T13:42:31.384 INFO:teuthology.orchestra.run.vm03.stdout:  Running scriptlet: librbd1-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 10/20
2026-03-06T13:42:31.384 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : librados2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_6 11/20
2026-03-06T13:42:31.384 INFO:teuthology.orchestra.run.vm03.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-06T13:42:31.384 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:31.398 INFO:teuthology.orchestra.run.vm03.stdout:  Running scriptlet: librados2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_6 11/20
2026-03-06T13:42:31.400 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-06T13:42:31.403 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-06T13:42:31.407 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-06T13:42:31.410 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-06T13:42:31.413 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-06T13:42:31.416 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-06T13:42:31.418 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-06T13:42:31.419 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-06T13:42:31.435 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : librados2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_6 7/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : librbd1-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 8/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : librgw2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 10/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : python3-rados-2:19.2.3-47.gc24117fd552.el9.clyso.x 13/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : python3-rbd-2:19.2.3-47.gc24117fd552.el9.clyso.x86 14/20
2026-03-06T13:42:31.506 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : python3-rgw-2:19.2.3-47.gc24117fd552.el9.clyso.x86 15/20
2026-03-06T13:42:31.507 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-06T13:42:31.507 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : rbd-fuse-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 17/20
2026-03-06T13:42:31.507 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : rbd-nbd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64 18/20
2026-03-06T13:42:31.507 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-06T13:42:31.559 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  boost-program-options-1.75.0-13.el9.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  libarrow-9.0.0-15.el9.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  libarrow-doc-9.0.0-15.el9.noarch
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  libnbd-1.20.3-4.el9.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  libpmemobj-1.12.1-1.el9.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  librabbitmq-0.11.0-7.el9.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  librados2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  librbd1-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  librdkafka-1.6.1-102.el9.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  librgw2-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  lttng-ust-2.12.0-6.el9.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  parquet-libs-9.0.0-15.el9.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  python3-rados-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  python3-rbd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  python3-rgw-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  rbd-fuse-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  rbd-nbd-2:19.2.3-47.gc24117fd552.el9.clyso.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  re2-1:20211101-20.el9.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:  thrift-0.15.0-4.el9.x86_64
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-06T13:42:31.560 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:31.792 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: librbd1
2026-03-06T13:42:31.792 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:31.794 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:31.795 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:31.795 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:31.984 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-rados
2026-03-06T13:42:31.984 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:31.987 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:31.987 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:31.987 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:32.180 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-rgw
2026-03-06T13:42:32.180 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:32.182 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:32.183 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:32.183 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:32.371 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-cephfs
2026-03-06T13:42:32.371 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:32.374 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:32.374 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:32.374 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
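
Two details in the transaction above are worth noting. The "warning: file /etc/ceph: remove failed: No such file or directory" during the librados2 erase is rpm itself talking: a package in this transaction owns the /etc/ceph directory, an earlier cleanup step had evidently already deleted it, and rpm merely warns and carries on; the transaction still verifies and completes. And the string of "No match for argument" blocks (librbd1, python3-rados, ...) confirms those packages were already swept out as dependencies of librados2. To ask the rpmdb which installed package owns a path, a small sketch (the helper is mine; `rpm -qf` is the only real interface used):

    #!/usr/bin/env python3
    """Ask the rpmdb which installed package owns a path."""
    import subprocess

    def owner_of(path):
        # 'rpm -qf' prints the owning package, or exits non-zero
        # if no installed package claims the path.
        proc = subprocess.run(["rpm", "-qf", path],
                              capture_output=True, text=True)
        return proc.stdout.strip() if proc.returncode == 0 else None

    if __name__ == "__main__":
        print(owner_of("/etc/ceph") or "not owned by any package")
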
2026-03-06T13:42:32.552 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-rbd
2026-03-06T13:42:32.552 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:32.554 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:32.555 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:32.555 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:32.735 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: rbd-fuse
2026-03-06T13:42:32.735 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:32.737 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:32.738 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:32.738 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:32.913 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: rbd-mirror
2026-03-06T13:42:32.913 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:32.915 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:32.916 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:32.916 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:33.105 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: rbd-nbd
2026-03-06T13:42:33.105 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-06T13:42:33.108 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-06T13:42:33.108 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-06T13:42:33.108 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-06T13:42:33.138 DEBUG:teuthology.orchestra.run.vm03:> sudo yum clean all
2026-03-06T13:42:33.269 INFO:teuthology.orchestra.run.vm03.stdout:56 files removed
2026-03-06T13:42:33.297 DEBUG:teuthology.orchestra.run.vm03:> sudo rm /etc/yum.repos.d/ceph-source.repo
2026-03-06T13:42:33.328 DEBUG:teuthology.orchestra.run.vm03:> sudo rm /etc/yum.repos.d/ceph-noarch.repo
2026-03-06T13:42:33.396 DEBUG:teuthology.orchestra.run.vm03:> sudo rm /etc/yum.repos.d/ceph.repo
2026-03-06T13:42:33.468 DEBUG:teuthology.orchestra.run.vm03:> sudo yum clean expire-cache
2026-03-06T13:42:33.633 INFO:teuthology.orchestra.run.vm03.stdout:Cache was expired
2026-03-06T13:42:33.634 INFO:teuthology.orchestra.run.vm03.stdout:0 files removed
2026-03-06T13:42:33.660 DEBUG:teuthology.parallel:result is None
2026-03-06T13:42:33.660 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm03.local
2026-03-06T13:42:33.660 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-06T13:42:33.690 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-06T13:42:33.763 INFO:teuthology.orchestra.run.vm03.stderr:mv: cannot stat '/etc/yum/pluginconf.d/priorities.conf.orig': No such file or directory
2026-03-06T13:42:33.764 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-06T13:42:33.765 DEBUG:teuthology.parallel:result is None
2026-03-06T13:42:33.765 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-06T13:42:33.767 INFO:teuthology.task.clock:Checking final clock skew...
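
Before the clock check that follows, note the repo teardown just above: the install task scrubs the package-manager state it created, running `yum clean all`, deleting the three repo files it wrote (ceph-source.repo, ceph-noarch.repo, ceph.repo), and finishing with `yum clean expire-cache` so the next metadata fetch is fresh. The one hiccup is the unconditional `mv priorities.conf.orig` restore, which exits 1 here because no backup was ever made on this host; the run logs it and moves on. A sketch of the same cleanup done idempotently, with a missing backup treated as a no-op (paths are taken from the log; the helper itself is illustrative):

    #!/usr/bin/env python3
    """Tear down ad-hoc yum/dnf repos and restore plugin config."""
    import subprocess
    from pathlib import Path

    REPOS = ["ceph-source.repo", "ceph-noarch.repo", "ceph.repo"]

    def cleanup_repos():
        subprocess.run(["sudo", "yum", "clean", "all"], check=True)
        for name in REPOS:
            # '-f' makes a missing repo file a no-op, not an error.
            subprocess.run(["sudo", "rm", "-f",
                            f"/etc/yum.repos.d/{name}"], check=True)
        subprocess.run(["sudo", "yum", "clean", "expire-cache"], check=True)
        backup = Path("/etc/yum/pluginconf.d/priorities.conf.orig")
        if backup.exists():  # unlike the log's unconditional mv
            subprocess.run(["sudo", "mv", "-f", str(backup),
                            "/etc/yum/pluginconf.d/priorities.conf"],
                           check=True)

    if __name__ == "__main__":
        cleanup_repos()
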
2026-03-06T13:42:33.767 DEBUG:teuthology.orchestra.run.vm03:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-06T13:42:33.824 INFO:teuthology.orchestra.run.vm03.stderr:bash: line 1: ntpq: command not found
2026-03-06T13:42:33.837 INFO:teuthology.orchestra.run.vm03.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-06T13:42:33.837 INFO:teuthology.orchestra.run.vm03.stdout:===============================================================================
2026-03-06T13:42:33.837 INFO:teuthology.orchestra.run.vm03.stdout:^+ server1a.sim720.co.uk 2 6 377 24 +323us[ +243us] +/- 19ms
2026-03-06T13:42:33.837 INFO:teuthology.orchestra.run.vm03.stdout:^* 185.13.148.71 2 6 377 24 +105ns[ -80us] +/- 18ms
2026-03-06T13:42:33.837 INFO:teuthology.orchestra.run.vm03.stdout:^+ stratum2-4.NTP.TechFak.N> 2 6 377 23 +1333us[+1333us] +/- 18ms
2026-03-06T13:42:33.837 INFO:teuthology.orchestra.run.vm03.stdout:^+ ntp.kernfusion.at 2 6 375 24 -4089us[-4169us] +/- 29ms
2026-03-06T13:42:33.838 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-06T13:42:33.841 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-06T13:42:33.841 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-06T13:42:33.843 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-06T13:42:33.845 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-06T13:42:33.847 INFO:teuthology.task.internal:Duration was 718.451906 seconds
2026-03-06T13:42:33.847 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-06T13:42:33.849 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-06T13:42:33.849 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-06T13:42:33.932 INFO:teuthology.orchestra.run.vm03.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-06T13:42:34.390 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-06T13:42:34.390 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm03.local
2026-03-06T13:42:34.390 DEBUG:teuthology.orchestra.run.vm03:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-06T13:42:34.421 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-06T13:42:34.421 DEBUG:teuthology.orchestra.run.vm03:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-06T13:42:35.114 INFO:teuthology.task.internal.syslog:Compressing syslogs...
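
Two teardown idioms are visible above. The clock check chains `ntpq -p || chronyc sources || true`, so whichever time daemon the distro ships gets to answer (here ntpq is absent and chrony responds) and a host with neither cannot fail the run. The syslog scan then greps kern.log for BUG/INFO/DEADLOCK markers and pipes the hits through a long allowlist of known-benign patterns, keeping only the first line that survives; no output means a clean log. The same scan is easy to express directly; a sketch, with the inclusion and exclusion lists abbreviated from the log's grep chain:

    #!/usr/bin/env python3
    """Scan a kernel log for suspicious markers, minus known-benign noise."""
    import re

    INCLUDE = re.compile(r"\bBUG\b|\bINFO\b|\bDEADLOCK\b")
    EXCLUDE = [  # abbreviated; the real run filters ~20 patterns
        re.compile(r"task .* blocked for more than .* seconds"),
        re.compile(r"lockdep is turned off"),
        re.compile(r"CRON"),
        re.compile(r"INFO:ceph-create-keys"),
    ]

    def first_suspicious_line(path):
        with open(path, errors="replace") as fh:
            for line in fh:
                if INCLUDE.search(line) and not any(
                        pat.search(line) for pat in EXCLUDE):
                    return line.rstrip("\n")
        return None

    if __name__ == "__main__":
        hit = first_suspicious_line("kern.log")
        print(hit or "no suspicious kernel log lines")
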
2026-03-06T13:42:35.114 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-06T13:42:35.141 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-06T13:42:35.141 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-06T13:42:35.142 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-06T13:42:35.142 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-06T13:42:35.143 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-06T13:42:35.336 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 97.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-06T13:42:35.338 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-06T13:42:35.341 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-06T13:42:35.342 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-06T13:42:35.408 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-06T13:42:35.411 DEBUG:teuthology.orchestra.run.vm03:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-06T13:42:35.481 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = core
2026-03-06T13:42:35.498 DEBUG:teuthology.orchestra.run.vm03:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-06T13:42:35.556 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-06T13:42:35.556 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-06T13:42:35.560 INFO:teuthology.task.internal:Transferring archived files...
2026-03-06T13:42:35.560 DEBUG:teuthology.misc:Transferring archived files from vm03:/home/ubuntu/cephtest/archive to /archive/irq0-2026-03-06_13:20:18-orch:cephadm:workunits-cobaltcore-storage-v19.2.3-fasttrack-3-none-default-vps/271/remote/vm03
2026-03-06T13:42:35.560 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-06T13:42:35.636 INFO:teuthology.task.internal:Removing archive directory...
2026-03-06T13:42:35.636 DEBUG:teuthology.orchestra.run.vm03:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-06T13:42:35.692 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-06T13:42:35.695 INFO:teuthology.task.internal:Not uploading archives.
2026-03-06T13:42:35.695 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-06T13:42:35.697 INFO:teuthology.task.internal:Tidying up after the test...
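
The archive handoff above streams a tarball over the existing SSH channel rather than using scp or rsync: the remote side runs `sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .` writing to stdout, the teuthology host unpacks the stream into the job's archive directory, and only then is the remote copy removed. A minimal sketch of that stream-and-unpack pattern (host and remote path come from the log; the pipeline itself is illustrative, not teuthology's actual transfer code):

    #!/usr/bin/env python3
    """Stream a remote directory as a tar over ssh and unpack it locally."""
    import subprocess

    def pull_archive(host, remote_dir, local_dir):
        # Remote tar writes the archive to stdout; local tar reads
        # the same stream, so nothing is staged on disk in between.
        remote = subprocess.Popen(
            ["ssh", host, f"sudo tar c -f - -C {remote_dir} -- ."],
            stdout=subprocess.PIPE)
        subprocess.run(["tar", "x", "-C", local_dir],
                       stdin=remote.stdout, check=True)
        remote.stdout.close()
        if remote.wait() != 0:
            raise RuntimeError("remote tar failed")

    if __name__ == "__main__":
        pull_archive("vm03", "/home/ubuntu/cephtest/archive",
                     "/tmp/remote-vm03")  # local target path is illustrative
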
2026-03-06T13:42:35.697 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-06T13:42:35.754 INFO:teuthology.orchestra.run.vm03.stdout:  8532145 0 drwxr-xr-x 3 ubuntu ubuntu 19 Mar 6 13:42 /home/ubuntu/cephtest
2026-03-06T13:42:35.754 INFO:teuthology.orchestra.run.vm03.stdout: 21071361 0 drwxr-xr-x 3 ubuntu ubuntu 22 Mar 6 13:36 /home/ubuntu/cephtest/mnt.0
2026-03-06T13:42:35.754 INFO:teuthology.orchestra.run.vm03.stdout: 25234944 0 drwxr-xr-x 3 ubuntu ubuntu 17 Mar 6 13:36 /home/ubuntu/cephtest/mnt.0/client.0
2026-03-06T13:42:35.754 INFO:teuthology.orchestra.run.vm03.stdout: 67371479 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 6 13:36 /home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-06T13:42:35.755 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-06T13:42:35.755 INFO:teuthology.orchestra.run.vm03.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-06T13:42:35.755 ERROR:teuthology.run_tasks:Manager failed: internal.base
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 53, in base
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm03 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-06T13:42:35.755 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-06T13:42:35.758 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: CommandFailedError: Command failed on vm03 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-06T13:42:35.759 INFO:teuthology.run:Summary data:
description: orch:cephadm:workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}}
duration: 718.4519057273865
failure_reason: 'Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on vm03 with status 125: ''mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5726a36c3452e5b72190cfceba828abc62c819b7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh'''
flavor: default
owner: irq0
sentry_event: null
status: fail
success: false
2026-03-06T13:42:35.759 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-06T13:42:35.781 INFO:teuthology.run:FAIL
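
Reading the ending: the teardown CommandFailedError is a symptom, not the cause. The final `rmdir` fails only because the earlier workunit failure left the mnt.0/client.0/tmp tree behind, and the `find ... ; rmdir ...` construct is paired deliberately so the leftovers are listed right before the removal attempt that exposes them; the summary still carries the original failure_reason, cephadm/test_iscsi_pids_limit.sh exiting 125. Exit 125 is commonly the container runtime's own error code (podman and docker return it when `run` itself fails) and is also what GNU timeout returns when it cannot execute its command, both plausible for an iSCSI container test; the workunit's own log, not this teardown, is where the cause lives. A sketch of the same tidy-up check, reporting leftovers instead of raising (paths from the log; the helper is illustrative):

    #!/usr/bin/env python3
    """Report anything left under the test dir, then try to remove it."""
    import os
    from pathlib import Path

    def tidy(testdir):
        leftovers = [str(p) for p in Path(testdir).rglob("*")]
        for p in leftovers:
            print("leftover:", p)
        try:
            os.rmdir(testdir)  # like the log, fails if not empty
        except OSError as err:
            print(f"tidy-up failed: {err}")
        return leftovers

    if __name__ == "__main__":
        tidy("/home/ubuntu/cephtest")
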