2026-03-10T13:17:28.553 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T13:17:28.556 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T13:17:28.579 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1045
branch: squid
description: orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_cephadm_timeout}
email: null
first_in_suite: false
flavor: default
job_id: '1045'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 1
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - CEPHADM_REFRESH_FAILED
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - mon.a
  - mgr.a
  - osd.0
  - client.0
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm02.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB3yhs5G27u7eegIhlpI4fE/WaX3AhouLMbcCGHSIzy2MJk93jts6mUyUqdJAT2ZzgZL00u7VI/mkZf7gc1AglU=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install nvmetcli nvme-cli -y
- install: null
- cephadm: null
- workunit:
    clients:
      client.0:
      - cephadm/test_cephadm_timeout.py
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T13:17:28.579 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T13:17:28.579 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T13:17:28.579 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T13:17:28.580 INFO:teuthology.task.internal:Checking packages...
2026-03-10T13:17:28.580 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T13:17:28.580 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T13:17:28.580 INFO:teuthology.packaging:ref: None
2026-03-10T13:17:28.580 INFO:teuthology.packaging:tag: None
2026-03-10T13:17:28.580 INFO:teuthology.packaging:branch: squid
2026-03-10T13:17:28.580 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T13:17:28.580 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-10T13:17:29.318 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-10T13:17:29.319 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T13:17:29.320 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T13:17:29.320 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T13:17:29.322 INFO:teuthology.task.internal:Saving configuration
2026-03-10T13:17:29.326 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T13:17:29.327 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T13:17:29.334 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm02.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1045', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 13:16:34.245825', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:02', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB3yhs5G27u7eegIhlpI4fE/WaX3AhouLMbcCGHSIzy2MJk93jts6mUyUqdJAT2ZzgZL00u7VI/mkZf7gc1AglU='}
2026-03-10T13:17:29.334 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T13:17:29.335 INFO:teuthology.task.internal:roles: ubuntu@vm02.local - ['host.a', 'mon.a', 'mgr.a', 'osd.0', 'client.0']
2026-03-10T13:17:29.335 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T13:17:29.344 DEBUG:teuthology.task.console_log:vm02 does not support IPMI; excluding
2026-03-10T13:17:29.344 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f3ce63abd90>, signals=[15])
2026-03-10T13:17:29.344 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T13:17:29.346 INFO:teuthology.task.internal:Opening connections...
2026-03-10T13:17:29.346 DEBUG:teuthology.task.internal:connecting to ubuntu@vm02.local
2026-03-10T13:17:29.347 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T13:17:29.406 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T13:17:29.407 DEBUG:teuthology.orchestra.run.vm02:> uname -m
2026-03-10T13:17:29.552 INFO:teuthology.orchestra.run.vm02.stdout:x86_64
2026-03-10T13:17:29.552 DEBUG:teuthology.orchestra.run.vm02:> cat /etc/os-release
2026-03-10T13:17:29.609 INFO:teuthology.orchestra.run.vm02.stdout:NAME="CentOS Stream"
2026-03-10T13:17:29.609 INFO:teuthology.orchestra.run.vm02.stdout:VERSION="9"
2026-03-10T13:17:29.609 INFO:teuthology.orchestra.run.vm02.stdout:ID="centos"
2026-03-10T13:17:29.609 INFO:teuthology.orchestra.run.vm02.stdout:ID_LIKE="rhel fedora"
2026-03-10T13:17:29.609 INFO:teuthology.orchestra.run.vm02.stdout:VERSION_ID="9"
2026-03-10T13:17:29.609 INFO:teuthology.orchestra.run.vm02.stdout:PLATFORM_ID="platform:el9"
2026-03-10T13:17:29.609 INFO:teuthology.orchestra.run.vm02.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T13:17:29.609 INFO:teuthology.orchestra.run.vm02.stdout:ANSI_COLOR="0;31"
2026-03-10T13:17:29.609 INFO:teuthology.orchestra.run.vm02.stdout:LOGO="fedora-logo-icon"
2026-03-10T13:17:29.609 INFO:teuthology.orchestra.run.vm02.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T13:17:29.609 INFO:teuthology.orchestra.run.vm02.stdout:HOME_URL="https://centos.org/"
2026-03-10T13:17:29.609 INFO:teuthology.orchestra.run.vm02.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T13:17:29.609 INFO:teuthology.orchestra.run.vm02.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T13:17:29.609 INFO:teuthology.orchestra.run.vm02.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T13:17:29.609 INFO:teuthology.lock.ops:Updating vm02.local on lock server
2026-03-10T13:17:29.615 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T13:17:29.617 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T13:17:29.618 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T13:17:29.618 DEBUG:teuthology.orchestra.run.vm02:> test '!' -e /home/ubuntu/cephtest
2026-03-10T13:17:29.665 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T13:17:29.667 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T13:17:29.667 DEBUG:teuthology.orchestra.run.vm02:> test -z $(ls -A /var/lib/ceph)
2026-03-10T13:17:29.724 INFO:teuthology.orchestra.run.vm02.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T13:17:29.724 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T13:17:29.732 DEBUG:teuthology.orchestra.run.vm02:> test -e /ceph-qa-ready
2026-03-10T13:17:29.783 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:17:29.978 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T13:17:29.979 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T13:17:29.979 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T13:17:29.996 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T13:17:29.998 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T13:17:29.999 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T13:17:29.999 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T13:17:30.054 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T13:17:30.056 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T13:17:30.056 DEBUG:teuthology.orchestra.run.vm02:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T13:17:30.109 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:17:30.109 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T13:17:30.180 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T13:17:30.192 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T13:17:30.193 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T13:17:30.195 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T13:17:30.195 DEBUG:teuthology.orchestra.run.vm02:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T13:17:30.261 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T13:17:30.263 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T13:17:30.263 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T13:17:30.318 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T13:17:30.382 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T13:17:30.445 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T13:17:30.445 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T13:17:30.503 DEBUG:teuthology.orchestra.run.vm02:> sudo service rsyslog restart
2026-03-10T13:17:30.574 INFO:teuthology.orchestra.run.vm02.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T13:17:31.031 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T13:17:31.034 INFO:teuthology.task.internal:Starting timer...
2026-03-10T13:17:31.035 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T13:17:31.037 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T13:17:31.040 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0']}
2026-03-10T13:17:31.040 INFO:teuthology.task.selinux:Excluding vm02: VMs are not yet supported
2026-03-10T13:17:31.040 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T13:17:31.040 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T13:17:31.040 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T13:17:31.040 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T13:17:31.042 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T13:17:31.042 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T13:17:31.048 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T13:17:31.048 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventory40f9o_5l --limit vm02.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T13:19:44.392 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm02.local')]
2026-03-10T13:19:44.393 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm02.local'
2026-03-10T13:19:44.393 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T13:19:44.455 DEBUG:teuthology.orchestra.run.vm02:> true
2026-03-10T13:19:44.535 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm02.local'
2026-03-10T13:19:44.535 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T13:19:44.537 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T13:19:44.537 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T13:19:44.538 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T13:19:44.617 INFO:teuthology.orchestra.run.vm02.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T13:19:44.639 INFO:teuthology.orchestra.run.vm02.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T13:19:44.675 INFO:teuthology.orchestra.run.vm02.stderr:sudo: ntpd: command not found
2026-03-10T13:19:44.689 INFO:teuthology.orchestra.run.vm02.stdout:506 Cannot talk to daemon
2026-03-10T13:19:44.706 INFO:teuthology.orchestra.run.vm02.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T13:19:44.722 INFO:teuthology.orchestra.run.vm02.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T13:19:44.771 INFO:teuthology.orchestra.run.vm02.stderr:bash: line 1: ntpq: command not found
2026-03-10T13:19:44.775 INFO:teuthology.orchestra.run.vm02.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T13:19:44.775 INFO:teuthology.orchestra.run.vm02.stdout:===============================================================================
2026-03-10T13:19:44.775 INFO:teuthology.orchestra.run.vm02.stdout:^? www.h4x-gamers.top 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T13:19:44.775 INFO:teuthology.orchestra.run.vm02.stdout:^? node-4.infogral.is 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T13:19:44.775 INFO:teuthology.orchestra.run.vm02.stdout:^? ntp1.uni-ulm.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T13:19:44.775 INFO:teuthology.orchestra.run.vm02.stdout:^? node-3.infogral.is 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T13:19:44.775 INFO:teuthology.run_tasks:Running task pexec...
2026-03-10T13:19:44.778 INFO:teuthology.task.pexec:Executing custom commands...
2026-03-10T13:19:44.778 DEBUG:teuthology.orchestra.run.vm02:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T13:19:44.818 DEBUG:teuthology.task.pexec:ubuntu@vm02.local< sudo dnf remove nvme-cli -y
2026-03-10T13:19:44.818 DEBUG:teuthology.task.pexec:ubuntu@vm02.local< sudo dnf install nvmetcli nvme-cli -y
2026-03-10T13:19:44.818 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm02.local
2026-03-10T13:19:44.818 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T13:19:44.818 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y
2026-03-10T13:19:45.069 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: nvme-cli
2026-03-10T13:19:45.069 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T13:19:45.075 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T13:19:45.076 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T13:19:45.076 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T13:19:45.501 INFO:teuthology.orchestra.run.vm02.stdout:Last metadata expiration check: 0:01:20 ago on Tue 10 Mar 2026 01:18:25 PM UTC.
2026-03-10T13:19:45.616 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout: Package Architecture Version Repository Size
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout:Installing:
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout:Installing dependencies:
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout:Install 6 Packages
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout:Total download size: 2.3 M
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout:Installed size: 11 M
2026-03-10T13:19:45.617 INFO:teuthology.orchestra.run.vm02.stdout:Downloading Packages:
2026-03-10T13:19:45.979 INFO:teuthology.orchestra.run.vm02.stdout:(1/6): nvmetcli-0.8-3.el9.noarch.rpm 194 kB/s | 44 kB 00:00
2026-03-10T13:19:45.980 INFO:teuthology.orchestra.run.vm02.stdout:(2/6): python3-configshell-1.1.30-1.el9.noarch. 317 kB/s | 72 kB 00:00
2026-03-10T13:19:46.092 INFO:teuthology.orchestra.run.vm02.stdout:(3/6): python3-kmod-0.9-32.el9.x86_64.rpm 750 kB/s | 84 kB 00:00
2026-03-10T13:19:46.093 INFO:teuthology.orchestra.run.vm02.stdout:(4/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 1.3 MB/s | 150 kB 00:00
2026-03-10T13:19:46.205 INFO:teuthology.orchestra.run.vm02.stdout:(5/6): nvme-cli-2.16-1.el9.x86_64.rpm 2.5 MB/s | 1.2 MB 00:00
2026-03-10T13:19:46.266 INFO:teuthology.orchestra.run.vm02.stdout:(6/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 4.7 MB/s | 837 kB 00:00
2026-03-10T13:19:46.267 INFO:teuthology.orchestra.run.vm02.stdout:--------------------------------------------------------------------------------
2026-03-10T13:19:46.267 INFO:teuthology.orchestra.run.vm02.stdout:Total 3.6 MB/s | 2.3 MB 00:00
2026-03-10T13:19:46.336 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check
2026-03-10T13:19:46.343 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded.
2026-03-10T13:19:46.343 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T13:19:46.406 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T13:19:46.407 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction
2026-03-10T13:19:46.596 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1
2026-03-10T13:19:46.609 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6
2026-03-10T13:19:46.626 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6
2026-03-10T13:19:46.636 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-10T13:19:46.647 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-10T13:19:46.651 INFO:teuthology.orchestra.run.vm02.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6
2026-03-10T13:19:46.840 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6
2026-03-10T13:19:46.846 INFO:teuthology.orchestra.run.vm02.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-10T13:19:47.257 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-10T13:19:47.257 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T13:19:47.257 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T13:19:47.888 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6
2026-03-10T13:19:47.888 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6
2026-03-10T13:19:47.888 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-10T13:19:47.888 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-10T13:19:47.888 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6
2026-03-10T13:19:47.995 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6
2026-03-10T13:19:47.995 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T13:19:47.995 INFO:teuthology.orchestra.run.vm02.stdout:Installed:
2026-03-10T13:19:47.995 INFO:teuthology.orchestra.run.vm02.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-10T13:19:47.995 INFO:teuthology.orchestra.run.vm02.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-10T13:19:47.995 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-10T13:19:47.995 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T13:19:47.995 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T13:19:48.061 DEBUG:teuthology.parallel:result is None
2026-03-10T13:19:48.061 INFO:teuthology.run_tasks:Running task install...
2026-03-10T13:19:48.062 DEBUG:teuthology.task.install:project ceph
2026-03-10T13:19:48.063 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T13:19:48.063 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T13:19:48.063 INFO:teuthology.task.install:Using flavor: default
2026-03-10T13:19:48.065 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-10T13:19:48.065 INFO:teuthology.task.install:extra packages: []
2026-03-10T13:19:48.065 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-10T13:19:48.065 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T13:19:48.702 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-10T13:19:48.702 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-10T13:19:49.213 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-10T13:19:49.213 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T13:19:49.213 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-10T13:19:49.254 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-10T13:19:49.255 DEBUG:teuthology.orchestra.run.vm02:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-10T13:19:49.330 DEBUG:teuthology.orchestra.run.vm02:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-10T13:19:49.430 DEBUG:teuthology.orchestra.run.vm02:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-10T13:19:49.469 INFO:teuthology.orchestra.run.vm02.stdout:check_obsoletes = 1
2026-03-10T13:19:49.470 DEBUG:teuthology.orchestra.run.vm02:> sudo yum clean all
2026-03-10T13:19:49.680 INFO:teuthology.orchestra.run.vm02.stdout:41 files removed
2026-03-10T13:19:49.711 DEBUG:teuthology.orchestra.run.vm02:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-10T13:19:51.173 INFO:teuthology.orchestra.run.vm02.stdout:ceph packages for x86_64 66 kB/s | 84 kB 00:01
2026-03-10T13:19:52.160 INFO:teuthology.orchestra.run.vm02.stdout:ceph noarch packages 12 kB/s | 12 kB 00:00
2026-03-10T13:19:53.154 INFO:teuthology.orchestra.run.vm02.stdout:ceph source packages 1.9 kB/s | 1.9 kB 00:00
2026-03-10T13:19:54.383 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - BaseOS 7.4 MB/s | 8.9 MB 00:01
2026-03-10T13:19:56.606 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - AppStream 19 MB/s | 27 MB 00:01
2026-03-10T13:20:01.889 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - CRB 3.6 MB/s | 8.0 MB 00:02
2026-03-10T13:20:03.360 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - Extras packages 34 kB/s | 20 kB 00:00
2026-03-10T13:20:04.246 INFO:teuthology.orchestra.run.vm02.stdout:Extra Packages for Enterprise Linux 25 MB/s | 20 MB 00:00
2026-03-10T13:20:09.127 INFO:teuthology.orchestra.run.vm02.stdout:lab-extras 63 kB/s | 50 kB 00:00
2026-03-10T13:20:10.567 INFO:teuthology.orchestra.run.vm02.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T13:20:10.567 INFO:teuthology.orchestra.run.vm02.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T13:20:10.571 INFO:teuthology.orchestra.run.vm02.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed.
2026-03-10T13:20:10.572 INFO:teuthology.orchestra.run.vm02.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-10T13:20:10.602 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout:======================================================================================
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout:======================================================================================
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout:Installing:
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M
2026-03-10T13:20:10.606 INFO:teuthology.orchestra.run.vm02.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout:Upgrading:
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout:Installing dependencies:
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-10T13:20:10.607 INFO:teuthology.orchestra.run.vm02.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-10T13:20:10.608 INFO:teuthology.orchestra.run.vm02.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout: unzip x86_64 6.0-59.el9 baseos 182 k
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout: zip x86_64 3.0-35.el9 baseos 266 k
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout:Installing weak dependencies:
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout:======================================================================================
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout:Install 134 Packages
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout:Upgrade 2 Packages
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout:Total download size: 210 M
2026-03-10T13:20:10.609 INFO:teuthology.orchestra.run.vm02.stdout:Downloading Packages:
2026-03-10T13:20:11.990 INFO:teuthology.orchestra.run.vm02.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 14 kB/s | 6.5 kB 00:00
2026-03-10T13:20:12.804 INFO:teuthology.orchestra.run.vm02.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.4 MB/s | 1.2 MB 00:00
2026-03-10T13:20:12.923 INFO:teuthology.orchestra.run.vm02.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 1.2 MB/s | 145 kB 00:00
2026-03-10T13:20:13.098 INFO:teuthology.orchestra.run.vm02.stdout:(4/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 3.5 MB/s | 5.5 MB 00:01
2026-03-10T13:20:13.175 INFO:teuthology.orchestra.run.vm02.stdout:(5/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 9.6 MB/s | 2.4 MB 00:00
2026-03-10T13:20:13.233 INFO:teuthology.orchestra.run.vm02.stdout:(6/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 8.0 MB/s | 1.1 MB 00:00
2026-03-10T13:20:13.554 INFO:teuthology.orchestra.run.vm02.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 13 MB/s | 4.7 MB 00:00
2026-03-10T13:20:14.034 INFO:teuthology.orchestra.run.vm02.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 8.7 MB/s | 22 MB 00:02
2026-03-10T13:20:14.161 INFO:teuthology.orchestra.run.vm02.stdout:(9/136): ceph-selinux-19.2.3-678.ge911bdeb.el9. 197 kB/s | 25 kB 00:00
2026-03-10T13:20:14.305 INFO:teuthology.orchestra.run.vm02.stdout:(10/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9 14 MB/s | 11 MB 00:00
2026-03-10T13:20:14.431 INFO:teuthology.orchestra.run.vm02.stdout:(11/136): libcephfs-devel-19.2.3-678.ge911bdeb. 268 kB/s | 34 kB 00:00
2026-03-10T13:20:14.491 INFO:teuthology.orchestra.run.vm02.stdout:(12/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 14 MB/s | 17 MB 00:01
2026-03-10T13:20:14.561 INFO:teuthology.orchestra.run.vm02.stdout:(13/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 7.5 MB/s | 1.0 MB 00:00
2026-03-10T13:20:14.613 INFO:teuthology.orchestra.run.vm02.stdout:(14/136): libcephsqlite-19.2.3-678.ge911bdeb.el 1.3 MB/s | 163 kB 00:00
2026-03-10T13:20:14.678 INFO:teuthology.orchestra.run.vm02.stdout:(15/136): librados-devel-19.2.3-678.ge911bdeb.e 1.1 MB/s | 127 kB 00:00
2026-03-10T13:20:15.278 INFO:teuthology.orchestra.run.vm02.stdout:(16/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 9.0 MB/s | 5.4 MB 00:00
2026-03-10T13:20:15.395 INFO:teuthology.orchestra.run.vm02.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 386 kB/s | 45 kB 00:00
2026-03-10T13:20:15.514 INFO:teuthology.orchestra.run.vm02.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 1.2 MB/s | 142 kB 00:00
2026-03-10T13:20:15.631 INFO:teuthology.orchestra.run.vm02.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.4 MB/s | 165 kB 00:00
2026-03-10T13:20:15.735 INFO:teuthology.orchestra.run.vm02.stdout:(20/136): libradosstriper1-19.2.3-678.ge911bdeb 449 kB/s | 503 kB 00:01
2026-03-10T13:20:15.751 INFO:teuthology.orchestra.run.vm02.stdout:(21/136): python3-rados-19.2.3-678.ge911bdeb.el 2.6 MB/s | 323 kB 00:00
2026-03-10T13:20:15.859 INFO:teuthology.orchestra.run.vm02.stdout:(22/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.4 MB/s | 303 kB 00:00
2026-03-10T13:20:15.867 INFO:teuthology.orchestra.run.vm02.stdout:(23/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 854 kB/s | 100 kB 00:00
2026-03-10T13:20:15.980 INFO:teuthology.orchestra.run.vm02.stdout:(24/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 702 kB/s | 85 kB 00:00
2026-03-10T13:20:16.104 INFO:teuthology.orchestra.run.vm02.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.4 MB/s | 171 kB 00:00
2026-03-10T13:20:16.225 INFO:teuthology.orchestra.run.vm02.stdout:(26/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 8.7 MB/s | 3.1 MB 00:00
2026-03-10T13:20:16.226 INFO:teuthology.orchestra.run.vm02.stdout:(27/136): ceph-grafana-dashboards-19.2.3-678.ge 253 kB/s | 31 kB 00:00
2026-03-10T13:20:16.344 INFO:teuthology.orchestra.run.vm02.stdout:(28/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00
2026-03-10T13:20:16.628 INFO:teuthology.orchestra.run.vm02.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 9.5 MB/s | 3.8 MB 00:00
2026-03-10T13:20:16.751 INFO:teuthology.orchestra.run.vm02.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 2.0 MB/s | 253 kB 00:00
2026-03-10T13:20:16.872 INFO:teuthology.orchestra.run.vm02.stdout:(31/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 407 kB/s | 49 kB 00:00
2026-03-10T13:20:16.954 INFO:teuthology.orchestra.run.vm02.stdout:(32/136): ceph-mgr-diskprediction-local-19.2.3- 12 MB/s | 7.4 MB 00:00
2026-03-10T13:20:16.993 INFO:teuthology.orchestra.run.vm02.stdout:(33/136): ceph-prometheus-alerts-19.2.3-678.ge9 139 kB/s | 17 kB 00:00
2026-03-10T13:20:17.074 INFO:teuthology.orchestra.run.vm02.stdout:(34/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.4 MB/s | 299 kB 00:00
2026-03-10T13:20:17.359 INFO:teuthology.orchestra.run.vm02.stdout:(35/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 2.1 MB/s | 769 kB 00:00
2026-03-10T13:20:17.565 INFO:teuthology.orchestra.run.vm02.stdout:(36/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 716 kB/s | 351 kB 00:00
2026-03-10T13:20:17.792 INFO:teuthology.orchestra.run.vm02.stdout:(37/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 94 kB/s | 40 kB 00:00
2026-03-10T13:20:17.918 INFO:teuthology.orchestra.run.vm02.stdout:(38/136): libconfig-1.7.2-9.el9.x86_64.rpm 204 kB/s | 72 kB 00:00
2026-03-10T13:20:18.555 INFO:teuthology.orchestra.run.vm02.stdout:(39/136): libquadmath-11.5.0-14.el9.x86_64.rpm 290 kB/s | 184 kB 00:00
2026-03-10T13:20:18.593 INFO:teuthology.orchestra.run.vm02.stdout:(40/136): mailcap-2.1.49-5.el9.noarch.rpm 899 kB/s | 33 kB 00:00
2026-03-10T13:20:18.764 INFO:teuthology.orchestra.run.vm02.stdout:(41/136): libgfortran-11.5.0-14.el9.x86_64.rpm 817 kB/s | 794 kB 00:00
2026-03-10T13:20:18.832 INFO:teuthology.orchestra.run.vm02.stdout:(42/136): pciutils-3.7.0-7.el9.x86_64.rpm 390 kB/s | 93 kB 00:00
2026-03-10T13:20:18.959 INFO:teuthology.orchestra.run.vm02.stdout:(43/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 1.3 MB/s | 253 kB 00:00
2026-03-10T13:20:19.199 INFO:teuthology.orchestra.run.vm02.stdout:(44/136): python3-ply-3.11-14.el9.noarch.rpm 443 kB/s | 106 kB 00:00
2026-03-10T13:20:19.254 INFO:teuthology.orchestra.run.vm02.stdout:(45/136): python3-cryptography-36.0.1-5.el9.x86 3.0 MB/s | 1.2 MB 00:00
2026-03-10T13:20:19.422 INFO:teuthology.orchestra.run.vm02.stdout:(46/136): python3-pycparser-2.20-6.el9.noarch.r 608 kB/s | 135 kB 00:00
2026-03-10T13:20:19.715 INFO:teuthology.orchestra.run.vm02.stdout:(47/136): ceph-test-19.2.3-678.ge911bdeb.el9.x8 8.9 MB/s | 50 MB 00:05
2026-03-10T13:20:19.716 INFO:teuthology.orchestra.run.vm02.stdout:(48/136): python3-requests-2.25.1-10.el9.noarch 273 kB/s | 126 kB 00:00
2026-03-10T13:20:19.718 INFO:teuthology.orchestra.run.vm02.stdout:(49/136): python3-urllib3-1.26.5-7.el9.noarch.r 737 kB/s | 218 kB 00:00
2026-03-10T13:20:19.996 INFO:teuthology.orchestra.run.vm02.stdout:(50/136): boost-program-options-1.75.0-13.el9.x 374 kB/s | 104 kB 00:00
2026-03-10T13:20:20.053 INFO:teuthology.orchestra.run.vm02.stdout:(51/136): flexiblas-3.0.4-9.el9.x86_64.rpm 523 kB/s | 30 kB 00:00
2026-03-10T13:20:20.068 INFO:teuthology.orchestra.run.vm02.stdout:(52/136): unzip-6.0-59.el9.x86_64.rpm 517 kB/s | 182 kB 00:00
2026-03-10T13:20:20.206 INFO:teuthology.orchestra.run.vm02.stdout:(53/136): zip-3.0-35.el9.x86_64.rpm 543 kB/s | 266 kB 00:00
2026-03-10T13:20:20.235 INFO:teuthology.orchestra.run.vm02.stdout:(54/136): flexiblas-openblas-openmp-3.0.4-9.el9 89 kB/s | 15 kB 00:00
2026-03-10T13:20:20.350 INFO:teuthology.orchestra.run.vm02.stdout:(55/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 10 MB/s | 3.0 MB 00:00
2026-03-10T13:20:20.404 INFO:teuthology.orchestra.run.vm02.stdout:(56/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 951 kB/s | 160 kB 00:00
2026-03-10T13:20:20.407 INFO:teuthology.orchestra.run.vm02.stdout:(57/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 797 kB/s | 45 kB 00:00
2026-03-10T13:20:20.466 INFO:teuthology.orchestra.run.vm02.stdout:(58/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 4.1 MB/s | 246 kB 00:00
2026-03-10T13:20:20.525 INFO:teuthology.orchestra.run.vm02.stdout:(59/136): libxslt-1.1.34-12.el9.x86_64.rpm 3.9 MB/s | 233 kB 00:00
2026-03-10T13:20:20.526 INFO:teuthology.orchestra.run.vm02.stdout:(60/136): libnbd-1.20.3-4.el9.x86_64.rpm 513 kB/s | 164 kB 00:00
2026-03-10T13:20:20.574 INFO:teuthology.orchestra.run.vm02.stdout:(61/136): librdkafka-1.6.1-102.el9.x86_64.rpm 3.8 MB/s | 662 kB 00:00
2026-03-10T13:20:20.585 INFO:teuthology.orchestra.run.vm02.stdout:(62/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 4.8 MB/s | 292 kB 00:00
2026-03-10T13:20:20.631 INFO:teuthology.orchestra.run.vm02.stdout:(63/136): openblas-0.3.29-1.el9.x86_64.rpm 740 kB/s | 42 kB 00:00
2026-03-10T13:20:20.633 INFO:teuthology.orchestra.run.vm02.stdout:(64/136): lua-5.4.4-4.el9.x86_64.rpm 1.7 MB/s | 188 kB 00:00
2026-03-10T13:20:20.751 INFO:teuthology.orchestra.run.vm02.stdout:(65/136): protobuf-3.14.0-17.el9.x86_64.rpm 8.4 MB/s | 1.0 MB 00:00
2026-03-10T13:20:20.811 INFO:teuthology.orchestra.run.vm02.stdout:(66/136): python3-devel-3.9.25-3.el9.x86_64.rpm 4.0 MB/s | 244 kB 00:00
2026-03-10T13:20:20.870 INFO:teuthology.orchestra.run.vm02.stdout:(67/136): python3-jinja2-2.11.3-8.el9.noarch.rp 4.1 MB/s | 249 kB 00:00
2026-03-10T13:20:20.928 INFO:teuthology.orchestra.run.vm02.stdout:(68/136): python3-babel-2.9.1-2.el9.noarch.rpm 20 MB/s | 6.0 MB 00:00
2026-03-10T13:20:20.929 INFO:teuthology.orchestra.run.vm02.stdout:(69/136): python3-jmespath-1.0.1-1.el9.noarch.r 814 kB/s | 48 kB 00:00
2026-03-10T13:20:20.984 INFO:teuthology.orchestra.run.vm02.stdout:(70/136): python3-libstoragemgmt-1.10.1-1.el9.x 3.1 MB/s | 177 kB 00:00
2026-03-10T13:20:21.003 INFO:teuthology.orchestra.run.vm02.stdout:(71/136): openblas-openmp-0.3.29-1.el9.x86_64.r 13 MB/s | 5.3 MB 00:00
2026-03-10T13:20:21.005 INFO:teuthology.orchestra.run.vm02.stdout:(72/136): python3-mako-1.1.4-6.el9.noarch.rpm 2.2 MB/s | 172 kB 00:00
2026-03-10T13:20:21.041 INFO:teuthology.orchestra.run.vm02.stdout:(73/136): python3-markupsafe-1.1.1-12.el9.x86_6 610 kB/s | 35 kB 00:00
2026-03-10T13:20:21.098 INFO:teuthology.orchestra.run.vm02.stdout:(74/136): python3-packaging-20.9-5.el9.noarch.r 1.3 MB/s | 77 kB 00:00
2026-03-10T13:20:21.158 INFO:teuthology.orchestra.run.vm02.stdout:(75/136): python3-protobuf-3.14.0-17.el9.noarch 4.4 MB/s | 267 kB 00:00
2026-03-10T13:20:21.263 INFO:teuthology.orchestra.run.vm02.stdout:(76/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 1.5 MB/s | 157 kB 00:00
2026-03-10T13:20:21.341 INFO:teuthology.orchestra.run.vm02.stdout:(77/136): python3-pyasn1-modules-0.4.8-7.el9.no 3.5 MB/s | 277 kB 00:00
2026-03-10T13:20:21.391 INFO:teuthology.orchestra.run.vm02.stdout:(78/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 16 MB/s | 6.1 MB 00:00
2026-03-10T13:20:21.398 INFO:teuthology.orchestra.run.vm02.stdout:(79/136): python3-requests-oauthlib-1.3.0-12.el 943 kB/s | 54 kB 00:00
2026-03-10T13:20:21.471 INFO:teuthology.orchestra.run.vm02.stdout:(80/136): python3-toml-0.10.2-6.el9.noarch.rpm 569 kB/s | 42 kB 00:00
2026-03-10T13:20:21.561 INFO:teuthology.orchestra.run.vm02.stdout:(81/136): qatlib-25.08.0-2.el9.x86_64.rpm 2.6 MB/s | 240 kB 00:00
2026-03-10T13:20:21.639 INFO:teuthology.orchestra.run.vm02.stdout:(82/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 698 kB/s | 442 kB 00:00
2026-03-10T13:20:21.653 INFO:teuthology.orchestra.run.vm02.stdout:(83/136): qatlib-service-25.08.0-2.el9.x86_64.r 402 kB/s | 37 kB 00:00
2026-03-10T13:20:21.741 INFO:teuthology.orchestra.run.vm02.stdout:(84/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 652 kB/s | 66 kB 00:00
2026-03-10T13:20:21.749 INFO:teuthology.orchestra.run.vm02.stdout:(85/136): socat-1.7.4.1-8.el9.x86_64.rpm 3.1 MB/s | 303 kB 00:00
2026-03-10T13:20:21.803 INFO:teuthology.orchestra.run.vm02.stdout:(86/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 1.0 MB/s | 64 kB 00:00
2026-03-10T13:20:21.883 INFO:teuthology.orchestra.run.vm02.stdout:(87/136): lua-devel-5.4.4-4.el9.x86_64.rpm 168 kB/s | 22 kB 00:00
2026-03-10T13:20:21.963 INFO:teuthology.orchestra.run.vm02.stdout:(88/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 6.8 MB/s | 551 kB 00:00
2026-03-10T13:20:21.997 INFO:teuthology.orchestra.run.vm02.stdout:(89/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 9.1 MB/s | 308 kB 00:00
2026-03-10T13:20:21.999 INFO:teuthology.orchestra.run.vm02.stdout:(90/136): grpc-data-1.46.7-10.el9.noarch.rpm 9.0 MB/s | 19 kB 00:00
2026-03-10T13:20:22.034 INFO:teuthology.orchestra.run.vm02.stdout:(91/136): protobuf-compiler-3.14.0-17.el9.x86_6 3.7 MB/s | 862 kB 00:00
2026-03-10T13:20:22.041 INFO:teuthology.orchestra.run.vm02.stdout:(92/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 3.2 MB/s | 25 kB 00:00
2026-03-10T13:20:22.048 INFO:teuthology.orchestra.run.vm02.stdout:(93/136): liboath-2.6.12-1.el9.x86_64.rpm 7.2 MB/s | 49 kB 00:00
2026-03-10T13:20:22.056 INFO:teuthology.orchestra.run.vm02.stdout:(94/136): libunwind-1.6.2-1.el9.x86_64.rpm 9.0 MB/s | 67 kB 00:00
2026-03-10T13:20:22.069 INFO:teuthology.orchestra.run.vm02.stdout:(95/136): libarrow-9.0.0-15.el9.x86_64.rpm 63 MB/s | 4.4 MB 00:00
2026-03-10T13:20:22.072 INFO:teuthology.orchestra.run.vm02.stdout:(96/136): luarocks-3.9.2-5.el9.noarch.rpm 9.6 MB/s | 151 kB 00:00
2026-03-10T13:20:22.083 INFO:teuthology.orchestra.run.vm02.stdout:(97/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 62 MB/s | 838 kB 00:00
2026-03-10T13:20:22.086 INFO:teuthology.orchestra.run.vm02.stdout:(98/136): python3-autocommand-2.2.2-8.el9.noarc 8.6 MB/s | 29 kB 00:00
2026-03-10T13:20:22.088 INFO:teuthology.orchestra.run.vm02.stdout:(99/136): python3-asyncssh-2.13.2-5.el9.noarch. 33 MB/s | 548 kB 00:00
2026-03-10T13:20:22.089 INFO:teuthology.orchestra.run.vm02.stdout:(100/136): python3-backports-tarfile-1.2.0-1.el 21 MB/s | 60 kB 00:00
2026-03-10T13:20:22.091 INFO:teuthology.orchestra.run.vm02.stdout:(101/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 19 MB/s | 43 kB 00:00
2026-03-10T13:20:22.091 INFO:teuthology.orchestra.run.vm02.stdout:(102/136): python3-cachetools-4.2.4-1.el9.noarc 14 MB/s | 32 kB 00:00
2026-03-10T13:20:22.093 INFO:teuthology.orchestra.run.vm02.stdout:(103/136): python3-certifi-2023.05.07-4.el9.noa 6.7 MB/s | 14 kB 00:00
2026-03-10T13:20:22.095 INFO:teuthology.orchestra.run.vm02.stdout:(104/136): python3-cheroot-10.0.1-4.el9.noarch. 47 MB/s | 173 kB 00:00
2026-03-10T13:20:22.101 INFO:teuthology.orchestra.run.vm02.stdout:(105/136): python3-google-auth-2.45.0-1.el9.noa 49 MB/s | 254 kB 00:00
2026-03-10T13:20:22.102 INFO:teuthology.orchestra.run.vm02.stdout:(106/136): python3-cherrypy-18.6.1-2.el9.noarch 38 MB/s | 358 kB 00:00
2026-03-10T13:20:22.115 INFO:teuthology.orchestra.run.vm02.stdout:(107/136): python3-grpcio-tools-1.46.7-10.el9.x 11 MB/s | 144 kB 00:00
2026-03-10T13:20:22.119 INFO:teuthology.orchestra.run.vm02.stdout:(108/136): python3-jaraco-8.2.1-3.el9.noarch.rp 3.0 MB/s | 11 kB 00:00
2026-03-10T13:20:22.123 INFO:teuthology.orchestra.run.vm02.stdout:(109/136): python3-jaraco-classes-3.2.1-5.el9.n 4.1 MB/s | 18 kB 00:00
2026-03-10T13:20:22.127 INFO:teuthology.orchestra.run.vm02.stdout:(110/136): python3-jaraco-collections-3.0.0-8.e 6.8 MB/s | 23 kB 00:00
2026-03-10T13:20:22.133 INFO:teuthology.orchestra.run.vm02.stdout:(111/136): python3-grpcio-1.46.7-10.el9.x86_64. 63 MB/s | 2.0 MB 00:00
2026-03-10T13:20:22.134 INFO:teuthology.orchestra.run.vm02.stdout:(112/136): python3-jaraco-context-6.0.1-3.el9.n 2.8 MB/s | 20 kB 00:00
2026-03-10T13:20:22.135 INFO:teuthology.orchestra.run.vm02.stdout:(113/136): python3-jaraco-functools-3.5.0-2.el9 9.4 MB/s | 19 kB 00:00
2026-03-10T13:20:22.136 INFO:teuthology.orchestra.run.vm02.stdout:(114/136): python3-jaraco-text-4.0.0-2.el9.noar 11 MB/s | 26 kB 00:00
2026-03-10T13:20:22.140 INFO:teuthology.orchestra.run.vm02.stdout:(115/136): python3-logutils-0.3.5-21.el9.noarch 12 MB/s | 46 kB 00:00
2026-03-10T13:20:22.144 INFO:teuthology.orchestra.run.vm02.stdout:(116/136): python3-more-itertools-8.12.0-2.el9.
20 MB/s | 79 kB 00:00 2026-03-10T13:20:22.148 INFO:teuthology.orchestra.run.vm02.stdout:(117/136): python3-natsort-7.1.1-5.el9.noarch.r 16 MB/s | 58 kB 00:00 2026-03-10T13:20:22.151 INFO:teuthology.orchestra.run.vm02.stdout:(118/136): python3-kubernetes-26.1.0-3.el9.noar 66 MB/s | 1.0 MB 00:00 2026-03-10T13:20:22.155 INFO:teuthology.orchestra.run.vm02.stdout:(119/136): python3-pecan-1.4.2-3.el9.noarch.rpm 39 MB/s | 272 kB 00:00 2026-03-10T13:20:22.155 INFO:teuthology.orchestra.run.vm02.stdout:(120/136): python3-portend-3.1.0-2.el9.noarch.r 3.7 MB/s | 16 kB 00:00 2026-03-10T13:20:22.159 INFO:teuthology.orchestra.run.vm02.stdout:(121/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 22 MB/s | 90 kB 00:00 2026-03-10T13:20:22.160 INFO:teuthology.orchestra.run.vm02.stdout:(122/136): python3-repoze-lru-0.7-16.el9.noarch 6.7 MB/s | 31 kB 00:00 2026-03-10T13:20:22.163 INFO:teuthology.orchestra.run.vm02.stdout:(123/136): python3-routes-2.5.1-5.el9.noarch.rp 46 MB/s | 188 kB 00:00 2026-03-10T13:20:22.164 INFO:teuthology.orchestra.run.vm02.stdout:(124/136): python3-rsa-4.9-2.el9.noarch.rpm 15 MB/s | 59 kB 00:00 2026-03-10T13:20:22.166 INFO:teuthology.orchestra.run.vm02.stdout:(125/136): python3-tempora-5.0.0-2.el9.noarch.r 16 MB/s | 36 kB 00:00 2026-03-10T13:20:22.167 INFO:teuthology.orchestra.run.vm02.stdout:(126/136): python3-typing-extensions-4.15.0-1.e 30 MB/s | 86 kB 00:00 2026-03-10T13:20:22.170 INFO:teuthology.orchestra.run.vm02.stdout:(127/136): python3-webob-1.8.8-2.el9.noarch.rpm 51 MB/s | 230 kB 00:00 2026-03-10T13:20:22.172 INFO:teuthology.orchestra.run.vm02.stdout:(128/136): python3-websocket-client-1.2.3-2.el9 18 MB/s | 90 kB 00:00 2026-03-10T13:20:22.175 INFO:teuthology.orchestra.run.vm02.stdout:(129/136): python3-xmltodict-0.12.0-15.el9.noar 6.9 MB/s | 22 kB 00:00 2026-03-10T13:20:22.178 INFO:teuthology.orchestra.run.vm02.stdout:(130/136): python3-werkzeug-2.0.3-3.el9.1.noarc 57 MB/s | 427 kB 00:00 2026-03-10T13:20:22.179 INFO:teuthology.orchestra.run.vm02.stdout:(131/136): python3-zc-lockfile-2.0-10.el9.noarc 6.6 MB/s | 20 kB 00:00 2026-03-10T13:20:22.182 INFO:teuthology.orchestra.run.vm02.stdout:(132/136): re2-20211101-20.el9.x86_64.rpm 42 MB/s | 191 kB 00:00 2026-03-10T13:20:22.201 INFO:teuthology.orchestra.run.vm02.stdout:(133/136): thrift-0.15.0-4.el9.x86_64.rpm 71 MB/s | 1.6 MB 00:00 2026-03-10T13:20:22.803 INFO:teuthology.orchestra.run.vm02.stdout:(134/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 14 MB/s | 19 MB 00:01 2026-03-10T13:20:23.271 INFO:teuthology.orchestra.run.vm02.stdout:(135/136): librados2-19.2.3-678.ge911bdeb.el9.x 3.2 MB/s | 3.4 MB 00:01 2026-03-10T13:20:23.292 INFO:teuthology.orchestra.run.vm02.stdout:(136/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 2.9 MB/s | 3.2 MB 00:01 2026-03-10T13:20:23.295 INFO:teuthology.orchestra.run.vm02.stdout:-------------------------------------------------------------------------------- 2026-03-10T13:20:23.295 INFO:teuthology.orchestra.run.vm02.stdout:Total 17 MB/s | 210 MB 00:12 2026-03-10T13:20:23.925 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T13:20:23.982 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T13:20:23.982 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T13:20:24.879 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 
2026-03-10T13:20:24.879 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T13:20:25.887 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T13:20:25.932 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138 2026-03-10T13:20:25.954 INFO:teuthology.orchestra.run.vm02.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138 2026-03-10T13:20:26.144 INFO:teuthology.orchestra.run.vm02.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138 2026-03-10T13:20:26.147 INFO:teuthology.orchestra.run.vm02.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-10T13:20:26.212 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-10T13:20:26.214 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138 2026-03-10T13:20:26.246 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138 2026-03-10T13:20:26.256 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-10T13:20:26.261 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138 2026-03-10T13:20:26.264 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138 2026-03-10T13:20:26.270 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138 2026-03-10T13:20:26.281 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138 2026-03-10T13:20:26.283 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T13:20:26.322 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T13:20:26.324 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138 2026-03-10T13:20:26.346 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138 2026-03-10T13:20:26.386 INFO:teuthology.orchestra.run.vm02.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138 2026-03-10T13:20:26.427 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138 2026-03-10T13:20:26.434 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138 2026-03-10T13:20:26.462 INFO:teuthology.orchestra.run.vm02.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138 2026-03-10T13:20:26.477 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138 2026-03-10T13:20:26.486 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138 2026-03-10T13:20:26.498 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138 2026-03-10T13:20:26.506 INFO:teuthology.orchestra.run.vm02.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138 2026-03-10T13:20:26.511 INFO:teuthology.orchestra.run.vm02.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138 2026-03-10T13:20:26.517 INFO:teuthology.orchestra.run.vm02.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138 2026-03-10T13:20:26.548 INFO:teuthology.orchestra.run.vm02.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138 2026-03-10T13:20:26.566 
INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138 2026-03-10T13:20:26.572 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138 2026-03-10T13:20:26.581 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138 2026-03-10T13:20:26.585 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138 2026-03-10T13:20:26.621 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138 2026-03-10T13:20:26.629 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138 2026-03-10T13:20:26.640 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138 2026-03-10T13:20:26.656 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138 2026-03-10T13:20:26.665 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138 2026-03-10T13:20:26.696 INFO:teuthology.orchestra.run.vm02.stdout: Installing : zip-3.0-35.el9.x86_64 33/138 2026-03-10T13:20:26.703 INFO:teuthology.orchestra.run.vm02.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138 2026-03-10T13:20:26.713 INFO:teuthology.orchestra.run.vm02.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138 2026-03-10T13:20:26.745 INFO:teuthology.orchestra.run.vm02.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138 2026-03-10T13:20:26.810 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138 2026-03-10T13:20:26.828 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138 2026-03-10T13:20:26.837 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138 2026-03-10T13:20:26.849 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138 2026-03-10T13:20:26.856 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138 2026-03-10T13:20:26.861 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138 2026-03-10T13:20:26.880 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138 2026-03-10T13:20:26.909 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138 2026-03-10T13:20:26.917 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138 2026-03-10T13:20:26.925 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138 2026-03-10T13:20:26.941 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138 2026-03-10T13:20:26.956 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138 2026-03-10T13:20:26.986 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138 2026-03-10T13:20:27.057 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138 2026-03-10T13:20:27.066 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138 2026-03-10T13:20:27.078 INFO:teuthology.orchestra.run.vm02.stdout: Installing : 
python3-certifi-2023.05.07-4.el9.noarch 52/138 2026-03-10T13:20:27.131 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138 2026-03-10T13:20:27.541 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138 2026-03-10T13:20:27.560 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138 2026-03-10T13:20:27.567 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138 2026-03-10T13:20:27.577 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138 2026-03-10T13:20:27.582 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138 2026-03-10T13:20:27.592 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138 2026-03-10T13:20:27.597 INFO:teuthology.orchestra.run.vm02.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138 2026-03-10T13:20:27.602 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138 2026-03-10T13:20:27.636 INFO:teuthology.orchestra.run.vm02.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138 2026-03-10T13:20:27.693 INFO:teuthology.orchestra.run.vm02.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138 2026-03-10T13:20:27.711 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138 2026-03-10T13:20:27.720 INFO:teuthology.orchestra.run.vm02.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138 2026-03-10T13:20:27.728 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138 2026-03-10T13:20:27.738 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138 2026-03-10T13:20:27.745 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138 2026-03-10T13:20:27.755 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138 2026-03-10T13:20:27.761 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138 2026-03-10T13:20:27.798 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138 2026-03-10T13:20:27.817 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138 2026-03-10T13:20:27.873 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138 2026-03-10T13:20:28.170 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138 2026-03-10T13:20:28.207 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138 2026-03-10T13:20:28.214 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138 2026-03-10T13:20:28.287 INFO:teuthology.orchestra.run.vm02.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138 2026-03-10T13:20:28.329 INFO:teuthology.orchestra.run.vm02.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138 2026-03-10T13:20:28.363 INFO:teuthology.orchestra.run.vm02.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138 2026-03-10T13:20:28.795 INFO:teuthology.orchestra.run.vm02.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138 2026-03-10T13:20:28.901 
INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138 2026-03-10T13:20:29.807 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138 2026-03-10T13:20:29.841 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138 2026-03-10T13:20:29.849 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138 2026-03-10T13:20:29.855 INFO:teuthology.orchestra.run.vm02.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138 2026-03-10T13:20:30.026 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138 2026-03-10T13:20:30.030 INFO:teuthology.orchestra.run.vm02.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T13:20:30.068 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T13:20:30.073 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138 2026-03-10T13:20:30.082 INFO:teuthology.orchestra.run.vm02.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138 2026-03-10T13:20:30.361 INFO:teuthology.orchestra.run.vm02.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138 2026-03-10T13:20:30.364 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T13:20:30.386 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T13:20:30.389 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138 2026-03-10T13:20:31.617 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T13:20:31.667 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T13:20:31.699 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T13:20:31.721 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138 2026-03-10T13:20:31.745 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138 2026-03-10T13:20:31.849 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138 2026-03-10T13:20:31.870 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138 2026-03-10T13:20:31.904 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138 2026-03-10T13:20:31.950 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138 2026-03-10T13:20:32.021 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138 2026-03-10T13:20:32.033 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138 2026-03-10T13:20:32.040 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-10T13:20:32.048 INFO:teuthology.orchestra.run.vm02.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138 2026-03-10T13:20:32.053 INFO:teuthology.orchestra.run.vm02.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138 2026-03-10T13:20:32.056 
INFO:teuthology.orchestra.run.vm02.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T13:20:32.078 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T13:20:32.411 INFO:teuthology.orchestra.run.vm02.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138 2026-03-10T13:20:32.418 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T13:20:32.472 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T13:20:32.472 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target. 2026-03-10T13:20:32.472 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 2026-03-10T13:20:32.472 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:20:32.482 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-10T13:20:39.681 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-10T13:20:39.681 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /sys 2026-03-10T13:20:39.682 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /proc 2026-03-10T13:20:39.682 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /mnt 2026-03-10T13:20:39.682 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /var/tmp 2026-03-10T13:20:39.682 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /home 2026-03-10T13:20:39.682 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /root 2026-03-10T13:20:39.682 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /tmp 2026-03-10T13:20:39.682 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:20:39.824 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-10T13:20:39.852 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-10T13:20:39.852 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:20:39.852 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-10T13:20:39.852 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-10T13:20:39.852 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-10T13:20:39.852 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:20:40.104 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-10T13:20:40.127 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-10T13:20:40.127 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-10T13:20:40.127 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-10T13:20:40.127 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-10T13:20:40.127 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-10T13:20:40.127 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:20:40.136 INFO:teuthology.orchestra.run.vm02.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138 2026-03-10T13:20:40.140 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138 2026-03-10T13:20:40.159 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T13:20:40.159 INFO:teuthology.orchestra.run.vm02.stdout:Creating group 'qat' with GID 994. 2026-03-10T13:20:40.159 INFO:teuthology.orchestra.run.vm02.stdout:Creating group 'libstoragemgmt' with GID 993. 2026-03-10T13:20:40.159 INFO:teuthology.orchestra.run.vm02.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993. 2026-03-10T13:20:40.159 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:20:40.172 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T13:20:40.204 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T13:20:40.204 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 2026-03-10T13:20:40.204 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:20:40.258 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138 2026-03-10T13:20:40.350 INFO:teuthology.orchestra.run.vm02.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138 2026-03-10T13:20:40.356 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-10T13:20:40.376 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-10T13:20:40.376 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:20:40.376 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-03-10T13:20:40.376 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:20:41.273 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138 2026-03-10T13:20:41.298 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138 2026-03-10T13:20:41.298 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:20:41.298 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-10T13:20:41.298 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 
2026-03-10T13:20:41.298 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-10T13:20:41.298 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:20:41.365 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138 2026-03-10T13:20:41.369 INFO:teuthology.orchestra.run.vm02.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138 2026-03-10T13:20:41.376 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138 2026-03-10T13:20:41.402 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138 2026-03-10T13:20:41.405 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138 2026-03-10T13:20:42.016 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138 2026-03-10T13:20:42.023 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138 2026-03-10T13:20:42.602 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138 2026-03-10T13:20:42.605 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138 2026-03-10T13:20:42.671 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138 2026-03-10T13:20:42.730 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138 2026-03-10T13:20:42.733 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138 2026-03-10T13:20:42.757 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138 2026-03-10T13:20:42.757 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:20:42.758 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T13:20:42.758 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-10T13:20:42.758 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-10T13:20:42.758 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:20:42.773 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-10T13:20:42.788 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-10T13:20:43.347 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138 2026-03-10T13:20:43.351 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138 2026-03-10T13:20:43.375 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138 2026-03-10T13:20:43.375 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-10T13:20:43.375 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-10T13:20:43.375 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-10T13:20:43.375 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-10T13:20:43.375 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:20:43.387 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138 2026-03-10T13:20:43.415 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138 2026-03-10T13:20:43.415 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:20:43.415 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-10T13:20:43.415 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:20:43.583 INFO:teuthology.orchestra.run.vm02.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138 2026-03-10T13:20:43.609 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138 2026-03-10T13:20:43.609 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:20:43.609 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-10T13:20:43.609 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-10T13:20:43.609 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 
2026-03-10T13:20:43.609 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:20:46.376 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138 2026-03-10T13:20:46.390 INFO:teuthology.orchestra.run.vm02.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138 2026-03-10T13:20:46.396 INFO:teuthology.orchestra.run.vm02.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138 2026-03-10T13:20:46.453 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138 2026-03-10T13:20:46.463 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T13:20:46.467 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138 2026-03-10T13:20:46.467 INFO:teuthology.orchestra.run.vm02.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T13:20:46.485 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T13:20:46.485 INFO:teuthology.orchestra.run.vm02.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T13:20:48.027 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T13:20:48.027 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138 2026-03-10T13:20:48.027 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138 2026-03-10T13:20:48.027 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138 2026-03-10T13:20:48.027 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-10T13:20:48.027 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138 2026-03-10T13:20:48.027 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-10T13:20:48.027 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138 2026-03-10T13:20:48.027 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138 2026-03-10T13:20:48.027 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 
17/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138 2026-03-10T13:20:48.028 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : 
python3-cryptography-36.0.1-5.el9.x86_64 45/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138 2026-03-10T13:20:48.029 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 69/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138 2026-03-10T13:20:48.030 
INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138 2026-03-10T13:20:48.030 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 85/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 93/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138 2026-03-10T13:20:48.031 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : 
python3-certifi-2023.05.07-4.el9.noarch 104/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138 2026-03-10T13:20:48.032 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 
132/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138 2026-03-10T13:20:48.033 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout:Upgraded: 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout:Installed: 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: 
ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T13:20:48.145 INFO:teuthology.orchestra.run.vm02.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: lua-5.4.4-4.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T13:20:48.146 
INFO:teuthology.orchestra.run.vm02.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T13:20:48.146 INFO:teuthology.orchestra.run.vm02.stdout: 
python3-logutils-0.3.5-21.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T13:20:48.147 
INFO:teuthology.orchestra.run.vm02.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: zip-3.0-35.el9.x86_64 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:20:48.147 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:20:48.260 DEBUG:teuthology.parallel:result is None 2026-03-10T13:20:48.260 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:20:48.860 DEBUG:teuthology.orchestra.run.vm02:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-10T13:20:48.881 INFO:teuthology.orchestra.run.vm02.stdout:19.2.3-678.ge911bdeb.el9 2026-03-10T13:20:48.881 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-10T13:20:48.881 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 2026-03-10T13:20:48.882 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-10T13:20:48.883 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T13:20:48.883 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T13:20:48.952 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 2026-03-10T13:20:48.953 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T13:20:48.953 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T13:20:49.018 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T13:20:49.084 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-10T13:20:49.084 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T13:20:49.084 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T13:20:49.150 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T13:20:49.219 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-10T13:20:49.220 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T13:20:49.220 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T13:20:49.288 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T13:20:49.350 INFO:teuthology.run_tasks:Running task cephadm... 
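The install task above verifies the build by comparing the output of rpm -q ceph --qf '%{VERSION}-%{RELEASE}' against the version Shaman reported for this sha1 (19.2.3-678.ge911bdeb). A minimal sketch of that kind of check is below; it is not teuthology's actual implementation, and stripping the ".el9" dist suffix is my assumption about how the two strings are normalized.

    import subprocess

    EXPECTED = "19.2.3-678.ge911bdeb"  # version Shaman reported for this sha1

    def installed_ceph_version() -> str:
        # Same query the log shows: rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
        out = subprocess.run(
            ["rpm", "-q", "ceph", "--qf", "%{VERSION}-%{RELEASE}"],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        # Drop the distro suffix (".el9") before comparing -- an assumption,
        # not necessarily how the install task normalizes versions.
        return out.rsplit(".el", 1)[0]

    version = installed_ceph_version()
    assert version == EXPECTED, f"wrong ceph version installed: {version}"
    print(f"The correct ceph version {version} is installed.")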
2026-03-10T13:20:49.400 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon election default strategy': 1}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'CEPHADM_REFRESH_FAILED'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'} 2026-03-10T13:20:49.400 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:20:49.400 INFO:tasks.cephadm:Cluster fsid is f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:20:49.400 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-10T13:20:49.400 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.102'} 2026-03-10T13:20:49.400 INFO:tasks.cephadm:First mon is mon.a on vm02 2026-03-10T13:20:49.400 INFO:tasks.cephadm:First mgr is a 2026-03-10T13:20:49.400 INFO:tasks.cephadm:Normalizing hostnames... 2026-03-10T13:20:49.400 DEBUG:teuthology.orchestra.run.vm02:> sudo hostname $(hostname -s) 2026-03-10T13:20:49.422 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra 2026-03-10T13:20:49.422 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:20:50.098 INFO:tasks.cephadm:builder_project result: [{'url': 'https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'chacra_url': 'https://3.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'centos', 'distro_version': '9', 'distro_codename': None, 'modified': '2026-02-25 18:55:15.146628', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['source', 'x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678.ge911bdeb', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.26+soko16', 'job_name': 'ceph-dev-pipeline'}}] 2026-03-10T13:20:50.663 INFO:tasks.util.chacra:got chacra host 3.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=centos%2F9%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:20:50.664 INFO:tasks.cephadm:Discovered cachra url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm 2026-03-10T13:20:50.664 INFO:tasks.cephadm:Downloading cephadm from url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm 2026-03-10T13:20:50.665 DEBUG:teuthology.orchestra.run.vm02:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T13:20:52.046 INFO:teuthology.orchestra.run.vm02.stdout:-rw-r--r--. 
1 ubuntu ubuntu 788355 Mar 10 13:20 /home/ubuntu/cephtest/cephadm 2026-03-10T13:20:52.047 DEBUG:teuthology.orchestra.run.vm02:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T13:20:52.072 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 2026-03-10T13:20:52.072 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T13:20:52.283 INFO:teuthology.orchestra.run.vm02.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T13:22:17.020 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T13:22:17.020 INFO:teuthology.orchestra.run.vm02.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-10T13:22:17.020 INFO:teuthology.orchestra.run.vm02.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-10T13:22:17.020 INFO:teuthology.orchestra.run.vm02.stdout: "repo_digests": [ 2026-03-10T13:22:17.020 INFO:teuthology.orchestra.run.vm02.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-10T13:22:17.020 INFO:teuthology.orchestra.run.vm02.stdout: ] 2026-03-10T13:22:17.020 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T13:22:17.044 DEBUG:teuthology.orchestra.run.vm02:> sudo mkdir -p /etc/ceph 2026-03-10T13:22:17.078 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 777 /etc/ceph 2026-03-10T13:22:17.147 INFO:tasks.cephadm:Writing seed config... 2026-03-10T13:22:17.148 INFO:tasks.cephadm: override: [global] mon election default strategy = 1 2026-03-10T13:22:17.148 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-10T13:22:17.148 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-10T13:22:17.148 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-10T13:22:17.148 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-10T13:22:17.148 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-10T13:22:17.148 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-10T13:22:17.148 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-10T13:22:17.148 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-10T13:22:17.148 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T13:22:17.148 DEBUG:teuthology.orchestra.run.vm02:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 2026-03-10T13:22:17.203 DEBUG:tasks.cephadm:Final config: [global] # make logging friendly to teuthology log_to_file = true log_to_stderr = false log to journald = false mon cluster log to file = true mon cluster log file level = debug mon clock drift allowed = 1.000 # replicate across OSDs, not hosts osd crush chooseleaf type = 0 #osd pool default size = 2 osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd # enable some debugging auth debug = true ms die on old message = true ms die on bug = true debug asserts on shutdown = true # adjust warnings mon max pg per osd = 10000# >= luminous mon pg warn max object skew = 0 mon osd allow primary affinity = true mon osd allow pg remap = true mon warn on legacy crush tunables = false mon warn on crush straw calc version zero = false mon warn on no sortbitwise = false mon 
warn on osd down out interval zero = false mon warn on too few osds = false mon_warn_on_pool_pg_num_not_power_of_two = false # disable pg_autoscaler by default for new pools osd_pool_default_pg_autoscale_mode = off # tests delete pools mon allow pool delete = true fsid = f4876d10-1c83-11f1-ae9f-3f8bea697626 mon election default strategy = 1 [osd] osd scrub load threshold = 5.0 osd scrub max interval = 600 osd mclock profile = high_recovery_ops osd recover clone overlap = true osd recovery max chunk = 1048576 osd deep scrub update digest min age = 30 osd map max advance = 10 osd memory target autotune = true # debugging osd debug shutdown = true osd debug op order = true osd debug verify stray on activate = true osd debug pg log writeout = true osd debug verify cached snaps = true osd debug verify missing on start = true osd debug misdirected ops = true osd op queue = debug_random osd op queue cut off = debug_random osd shutdown pgref assert = true bdev debug aio = true osd sloppy crc = true debug ms = 1 debug osd = 20 osd mclock iops capacity threshold hdd = 49000 [mgr] mon reweight min pgs per osd = 4 mon reweight min bytes per osd = 10 mgr/telemetry/nag = false debug mgr = 20 debug ms = 1 [mon] mon data avail warn = 5 mon mgr mkfs grace = 240 mon reweight min pgs per osd = 4 mon osd reporter subtree level = osd mon osd prime pg temp = true mon reweight min bytes per osd = 10 # rotate auth tickets quickly to exercise renewal paths auth mon ticket ttl = 660# 11m auth service ticket ttl = 240# 4m # don't complain about global id reclaim mon_warn_on_insecure_global_id_reclaim = false mon_warn_on_insecure_global_id_reclaim_allowed = false debug mon = 20 debug ms = 1 debug paxos = 20 [client.rgw] rgw cache enabled = true rgw enable ops log = true rgw enable usage log = true 2026-03-10T13:22:17.203 DEBUG:teuthology.orchestra.run.vm02:mon.a> sudo journalctl -f -n 0 -u ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mon.a.service 2026-03-10T13:22:17.245 DEBUG:teuthology.orchestra.run.vm02:mgr.a> sudo journalctl -f -n 0 -u ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mgr.a.service 2026-03-10T13:22:17.287 INFO:tasks.cephadm:Bootstrapping... 
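Just above, the seed config is assembled from the job's conf overrides (the [global] election strategy, the [mgr]/[mon]/[osd] debug levels, and the osd mclock threshold) and shipped to /home/ubuntu/cephtest/seed.ceph.conf before the bootstrap command that follows. A rough sketch of producing such a file from those overrides, using configparser purely as an illustration rather than the tasks.cephadm code path:

    import configparser

    # Override values copied from the job config shown earlier in this log.
    overrides = {
        "global": {"mon election default strategy": "1"},
        "mgr": {"debug mgr": "20", "debug ms": "1"},
        "mon": {"debug mon": "20", "debug ms": "1", "debug paxos": "20"},
        "osd": {"debug ms": "1", "debug osd": "20",
                "osd mclock iops capacity threshold hdd": "49000"},
    }

    conf = configparser.ConfigParser()
    conf.read_dict(overrides)
    with open("seed.ceph.conf", "w") as f:
        conf.write(f)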
2026-03-10T13:22:17.287 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.102 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:22:17.436 INFO:teuthology.orchestra.run.vm02.stdout:-------------------------------------------------------------------------------- 2026-03-10T13:22:17.436 INFO:teuthology.orchestra.run.vm02.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', 'f4876d10-1c83-11f1-ae9f-3f8bea697626', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'a', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.102', '--skip-admin-label'] 2026-03-10T13:22:17.437 INFO:teuthology.orchestra.run.vm02.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-10T13:22:17.437 INFO:teuthology.orchestra.run.vm02.stdout:Verifying podman|docker is present... 2026-03-10T13:22:17.456 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stdout 5.8.0 2026-03-10T13:22:17.456 INFO:teuthology.orchestra.run.vm02.stdout:Verifying lvm2 is present... 2026-03-10T13:22:17.456 INFO:teuthology.orchestra.run.vm02.stdout:Verifying time synchronization is in place... 2026-03-10T13:22:17.463 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T13:22:17.463 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T13:22:17.469 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T13:22:17.469 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive 2026-03-10T13:22:17.475 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout enabled 2026-03-10T13:22:17.481 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout active 2026-03-10T13:22:17.481 INFO:teuthology.orchestra.run.vm02.stdout:Unit chronyd.service is enabled and running 2026-03-10T13:22:17.481 INFO:teuthology.orchestra.run.vm02.stdout:Repeating the final host check... 
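The bootstrap's host check above probes for a running time-sync daemon by calling systemctl is-enabled / is-active on candidate units: chrony.service fails (no such unit file) and chronyd.service is reported enabled and active. A small sketch of that probe order; the candidate list beyond the two units named in the log is an assumption, and this is not cephadm's actual check.

    import subprocess

    # Only chrony.service and chronyd.service appear in the log; the other
    # candidates here are assumptions for illustration.
    CANDIDATES = ["chrony.service", "chronyd.service",
                  "systemd-timesyncd.service", "ntpd.service"]

    def active_time_sync_unit():
        for unit in CANDIDATES:
            enabled = subprocess.run(["systemctl", "is-enabled", unit],
                                     capture_output=True, text=True)
            active = subprocess.run(["systemctl", "is-active", unit],
                                    capture_output=True, text=True)
            if enabled.returncode == 0 and active.stdout.strip() == "active":
                return unit
        return None

    unit = active_time_sync_unit()
    print(f"Unit {unit} is enabled and running" if unit else "no time sync unit found")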
2026-03-10T13:22:17.500 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stdout 5.8.0 2026-03-10T13:22:17.500 INFO:teuthology.orchestra.run.vm02.stdout:podman (/bin/podman) version 5.8.0 is present 2026-03-10T13:22:17.500 INFO:teuthology.orchestra.run.vm02.stdout:systemctl is present 2026-03-10T13:22:17.500 INFO:teuthology.orchestra.run.vm02.stdout:lvcreate is present 2026-03-10T13:22:17.506 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T13:22:17.506 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T13:22:17.512 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T13:22:17.513 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive 2026-03-10T13:22:17.518 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout enabled 2026-03-10T13:22:17.524 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout active 2026-03-10T13:22:17.524 INFO:teuthology.orchestra.run.vm02.stdout:Unit chronyd.service is enabled and running 2026-03-10T13:22:17.524 INFO:teuthology.orchestra.run.vm02.stdout:Host looks OK 2026-03-10T13:22:17.524 INFO:teuthology.orchestra.run.vm02.stdout:Cluster fsid: f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:22:17.524 INFO:teuthology.orchestra.run.vm02.stdout:Acquiring lock 140028314957856 on /run/cephadm/f4876d10-1c83-11f1-ae9f-3f8bea697626.lock 2026-03-10T13:22:17.524 INFO:teuthology.orchestra.run.vm02.stdout:Lock 140028314957856 acquired on /run/cephadm/f4876d10-1c83-11f1-ae9f-3f8bea697626.lock 2026-03-10T13:22:17.524 INFO:teuthology.orchestra.run.vm02.stdout:Verifying IP 192.168.123.102 port 3300 ... 2026-03-10T13:22:17.525 INFO:teuthology.orchestra.run.vm02.stdout:Verifying IP 192.168.123.102 port 6789 ... 
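The "Verifying IP 192.168.123.102 port 3300 / 6789" lines check that the msgr2 and msgr1 monitor ports are not already taken on the mon IP before the daemon is created. A minimal way to perform that kind of check is a bind() probe, sketched below; this is an illustration, not cephadm's implementation.

    import socket

    def port_is_free(ip: str, port: int) -> bool:
        # A failed bind() generally means the address is unavailable or already in use.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind((ip, port))
            except OSError:
                return False
        return True

    for port in (3300, 6789):  # mon ports verified in the log above
        print(port, "free" if port_is_free("192.168.123.102", port) else "in use")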
2026-03-10T13:22:17.525 INFO:teuthology.orchestra.run.vm02.stdout:Base mon IP(s) is [192.168.123.102:3300, 192.168.123.102:6789], mon addrv is [v2:192.168.123.102:3300,v1:192.168.123.102:6789] 2026-03-10T13:22:17.528 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.102 metric 100 2026-03-10T13:22:17.528 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.102 metric 100 2026-03-10T13:22:17.531 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-10T13:22:17.531 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium 2026-03-10T13:22:17.534 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-10T13:22:17.534 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-10T13:22:17.534 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T13:22:17.534 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000 2026-03-10T13:22:17.534 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:2/64 scope link noprefixroute 2026-03-10T13:22:17.534 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T13:22:17.534 INFO:teuthology.orchestra.run.vm02.stdout:Mon IP `192.168.123.102` is in CIDR network `192.168.123.0/24` 2026-03-10T13:22:17.535 INFO:teuthology.orchestra.run.vm02.stdout:Mon IP `192.168.123.102` is in CIDR network `192.168.123.0/24` 2026-03-10T13:22:17.535 INFO:teuthology.orchestra.run.vm02.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24'] 2026-03-10T13:22:17.535 INFO:teuthology.orchestra.run.vm02.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-10T13:22:17.535 INFO:teuthology.orchestra.run.vm02.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T13:22:18.914 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-10T13:22:18.914 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
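The CIDR inference above works by listing local routes and addresses with /sbin/ip and finding which local subnet contains the mon IP, which is how 192.168.123.0/24 ends up as the public network. The same containment test with the stdlib ipaddress module, with the subnets hard-coded from the log instead of parsed from the ip output:

    import ipaddress

    mon_ip = ipaddress.ip_address("192.168.123.102")
    # Subnets as printed by the ip route output above.
    local_subnets = ["192.168.123.0/24", "fe80::/64"]

    public_network = next(
        (net for net in local_subnets
         if mon_ip in ipaddress.ip_network(net, strict=False)),
        None,
    )
    print(f"Mon IP {mon_ip} is in CIDR network {public_network}")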
2026-03-10T13:22:18.914 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stderr Getting image source signatures 2026-03-10T13:22:18.914 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7 2026-03-10T13:22:18.914 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92 2026-03-10T13:22:18.914 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-10T13:22:18.914 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stderr Writing manifest to image destination 2026-03-10T13:22:19.085 INFO:teuthology.orchestra.run.vm02.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T13:22:19.085 INFO:teuthology.orchestra.run.vm02.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T13:22:19.085 INFO:teuthology.orchestra.run.vm02.stdout:Extracting ceph user uid/gid from container image... 2026-03-10T13:22:19.169 INFO:teuthology.orchestra.run.vm02.stdout:stat: stdout 167 167 2026-03-10T13:22:19.169 INFO:teuthology.orchestra.run.vm02.stdout:Creating initial keys... 2026-03-10T13:22:19.282 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-authtool: stdout AQALG7BpW+H2DhAAHgGzuAzBGKoUu+/mRHdT3A== 2026-03-10T13:22:19.385 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-authtool: stdout AQALG7BpNsvDFBAAkoehjzhvEbz2GebO3nsUiw== 2026-03-10T13:22:19.495 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-authtool: stdout AQALG7Bpq1V5GxAAKHDv1EPXEShgemVKgP562g== 2026-03-10T13:22:19.496 INFO:teuthology.orchestra.run.vm02.stdout:Creating initial monmap... 2026-03-10T13:22:19.596 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T13:22:19.596 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-10T13:22:19.596 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:22:19.596 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T13:22:19.596 INFO:teuthology.orchestra.run.vm02.stdout:monmaptool for a [v2:192.168.123.102:3300,v1:192.168.123.102:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T13:22:19.596 INFO:teuthology.orchestra.run.vm02.stdout:setting min_mon_release = quincy 2026-03-10T13:22:19.596 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: set fsid to f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:22:19.596 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T13:22:19.596 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:22:19.596 INFO:teuthology.orchestra.run.vm02.stdout:Creating mon... 2026-03-10T13:22:19.718 INFO:teuthology.orchestra.run.vm02.stdout:create mon.a on 2026-03-10T13:22:19.877 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target". 
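Above, bootstrap generates the initial secrets with ceph-authtool (the three base64 keys) and writes an epoch-0 monmap for mon.a with monmaptool before creating the mon. Roughly equivalent standalone invocations are sketched below; the exact flags are my reconstruction from the log output, not the literal commands cephadm runs.

    import subprocess

    FSID = "f4876d10-1c83-11f1-ae9f-3f8bea697626"
    ADDRV = "[v2:192.168.123.102:3300,v1:192.168.123.102:6789]"

    # Generate a mon. keyring, as in the "Creating initial keys..." step.
    subprocess.run(["ceph-authtool", "--create-keyring", "/tmp/ceph.mon.keyring",
                    "--gen-key", "-n", "mon.", "--cap", "mon", "allow *"], check=True)

    # Write an epoch-0 monmap for mon.a, as in the "Creating initial monmap..." step.
    subprocess.run(["monmaptool", "--create", "--addv", "a", ADDRV,
                    "--fsid", FSID, "/tmp/monmap"], check=True)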
2026-03-10T13:22:20.009 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-10T13:22:20.142 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626.target → /etc/systemd/system/ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626.target. 2026-03-10T13:22:20.143 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626.target → /etc/systemd/system/ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626.target. 2026-03-10T13:22:20.303 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mon.a 2026-03-10T13:22:20.303 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to reset failed state of unit ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mon.a.service: Unit ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mon.a.service not loaded. 2026-03-10T13:22:20.453 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626.target.wants/ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mon.a.service → /etc/systemd/system/ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@.service. 2026-03-10T13:22:20.648 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present 2026-03-10T13:22:20.648 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T13:22:20.648 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mon to start... 2026-03-10T13:22:20.648 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mon... 
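"Waiting for mon to start... Waiting for mon..." is a readiness poll: bootstrap retries until the new monitor answers (the successful ceph status output follows below). A minimal polling loop of the same shape; using ceph status as the probe and a 60-second deadline are assumptions for illustration.

    import subprocess
    import time

    def wait_for_mon(timeout: float = 60, interval: float = 2) -> str:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            # A zero exit status means the mon answered the status request.
            r = subprocess.run(["ceph", "status"], capture_output=True, text=True)
            if r.returncode == 0:
                return r.stdout
            time.sleep(interval)
        raise TimeoutError("mon did not become available in time")

    print(wait_for_mon())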
2026-03-10T13:22:20.899 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout cluster: 2026-03-10T13:22:20.899 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout id: f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:22:20.899 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-10T13:22:20.899 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T13:22:20.899 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout services: 2026-03-10T13:22:20.899 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.167911s) 2026-03-10T13:22:20.899 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-10T13:22:20.899 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-10T13:22:20.899 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T13:22:20.899 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout data: 2026-03-10T13:22:20.899 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-10T13:22:20.899 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-10T13:22:20.899 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-10T13:22:20.900 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout pgs: 2026-03-10T13:22:20.900 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T13:22:20.900 INFO:teuthology.orchestra.run.vm02.stdout:mon is available 2026-03-10T13:22:20.900 INFO:teuthology.orchestra.run.vm02.stdout:Assimilating anything we can from ceph.conf... 2026-03-10T13:22:21.117 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout fsid = f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.102:3300,v1:192.168.123.102:6789] 2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 
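The "Assimilating anything we can from ceph.conf..." step pushes options from the seed config into the monitor's central configuration database; what is echoed back above ([global] fsid and mon_host plus a handful of options) is the minimal remainder that stays in the file. Presumably this wraps ceph config assimilate-conf; calling it directly would look like the sketch below.

    import subprocess

    # Feed a config file to the mon config store; stdout is the minimal remainder
    # that could not (or should not) be stored centrally.
    result = subprocess.run(
        ["ceph", "config", "assimilate-conf", "-i", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    )
    print(result.stdout)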
2026-03-10T13:22:21.118 INFO:teuthology.orchestra.run.vm02.stdout:Generating new minimal ceph.conf... 2026-03-10T13:22:21.331 INFO:teuthology.orchestra.run.vm02.stdout:Restarting the monitor... 2026-03-10T13:22:21.417 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 systemd[1]: Stopping Ceph mon.a for f4876d10-1c83-11f1-ae9f-3f8bea697626... 2026-03-10T13:22:21.714 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mon-a[52244]: 2026-03-10T13:22:21.415+0000 7fbae5604640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T13:22:21.714 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mon-a[52244]: 2026-03-10T13:22:21.415+0000 7fbae5604640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T13:22:21.714 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 podman[52450]: 2026-03-10 13:22:21.480925763 +0000 UTC m=+0.077836589 container died 5ae8fa47ed1ed3898e480dd83ade4ceafb250a88d043814ab77b985cdc9df71d (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mon-a, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) 2026-03-10T13:22:21.715 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 podman[52450]: 2026-03-10 13:22:21.495051044 +0000 UTC m=+0.091961870 container remove 5ae8fa47ed1ed3898e480dd83ade4ceafb250a88d043814ab77b985cdc9df71d (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mon-a, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T13:22:21.715 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 bash[52450]: ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mon-a 2026-03-10T13:22:21.715 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 systemd[1]: ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mon.a.service: Deactivated successfully. 
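"Restarting the monitor..." plays out above as a normal systemd/podman cycle: the unit is stopped, the container receives SIGTERM and is removed, the unit deactivates, and it is then started again with the freshly assimilated config. The unit name follows the ceph-<fsid>@<daemon>.service pattern visible in the journal lines; restarting it by hand would look like the sketch below (an illustration, not what cephadm executes).

    import subprocess

    FSID = "f4876d10-1c83-11f1-ae9f-3f8bea697626"
    unit = f"ceph-{FSID}@mon.a.service"

    # Equivalent to the stop/start sequence journald records above.
    subprocess.run(["sudo", "systemctl", "restart", unit], check=True)
    subprocess.run(["sudo", "systemctl", "is-active", unit], check=True)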
2026-03-10T13:22:21.715 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 systemd[1]: Stopped Ceph mon.a for f4876d10-1c83-11f1-ae9f-3f8bea697626. 2026-03-10T13:22:21.715 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 systemd[1]: Starting Ceph mon.a for f4876d10-1c83-11f1-ae9f-3f8bea697626... 2026-03-10T13:22:21.715 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 podman[52520]: 2026-03-10 13:22:21.665075673 +0000 UTC m=+0.021005110 container create 6dbf608920d670372c597edba97d9884f9555972b6725f88da7aca2c1ed6d03e (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mon-a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T13:22:21.728 INFO:teuthology.orchestra.run.vm02.stdout:Setting public_network to 192.168.123.0/24 in mon config section 2026-03-10T13:22:21.960 INFO:teuthology.orchestra.run.vm02.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T13:22:21.961 INFO:teuthology.orchestra.run.vm02.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:22:21.961 INFO:teuthology.orchestra.run.vm02.stdout:Creating mgr... 2026-03-10T13:22:21.961 INFO:teuthology.orchestra.run.vm02.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T13:22:21.962 INFO:teuthology.orchestra.run.vm02.stdout:Verifying port 0.0.0.0:8765 ... 
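"Setting public_network to 192.168.123.0/24 in mon config section" stores the inferred CIDR in the central config so later monitors know which network to bind on; the port probes on 9283 and 8765 mirror the earlier mon port checks before the mgr is created. The equivalent direct config update is sketched below, as an illustration of the setting being applied rather than the literal cephadm call.

    import subprocess

    # Store the inferred public network under the mon section of the config database.
    subprocess.run(
        ["ceph", "config", "set", "mon", "public_network", "192.168.123.0/24"],
        check=True,
    )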
2026-03-10T13:22:21.980 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 podman[52520]: 2026-03-10 13:22:21.716642815 +0000 UTC m=+0.072572242 container init 6dbf608920d670372c597edba97d9884f9555972b6725f88da7aca2c1ed6d03e (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mon-a, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20260223) 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 podman[52520]: 2026-03-10 13:22:21.721352475 +0000 UTC m=+0.077281901 container start 6dbf608920d670372c597edba97d9884f9555972b6725f88da7aca2c1ed6d03e (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mon-a, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3) 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 bash[52520]: 6dbf608920d670372c597edba97d9884f9555972b6725f88da7aca2c1ed6d03e 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 podman[52520]: 2026-03-10 13:22:21.657202025 +0000 UTC m=+0.013131463 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 systemd[1]: Started Ceph mon.a for f4876d10-1c83-11f1-ae9f-3f8bea697626. 
2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: set uid:gid to 167:167 (ceph:ceph) 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: pidfile_write: ignore empty --pid-file 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: load: jerasure load: lrc 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: RocksDB version: 7.9.2 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Git sha 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: DB SUMMARY 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: DB Session ID: 4B4OX8BIK06FTU8E54H4 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: CURRENT file: CURRENT 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: IDENTITY file: IDENTITY 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 75535 ; 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.error_if_exists: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.create_if_missing: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.paranoid_checks: 1 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.env: 0x5566101b5dc0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.fs: PosixFileSystem 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.info_log: 0x556611b14700 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_file_opening_threads: 16 2026-03-10T13:22:21.981 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.statistics: (nil) 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.use_fsync: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_log_file_size: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.keep_log_file_num: 1000 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.recycle_log_file_num: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.allow_fallocate: 1 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.allow_mmap_reads: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.allow_mmap_writes: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.use_direct_reads: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.create_missing_column_families: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.db_log_dir: 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.wal_dir: 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.advise_random_on_open: 1 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.db_write_buffer_size: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.write_buffer_manager: 0x556611b19900 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.access_hint_on_compaction_start: 1 
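The long rocksdb: blocks here and below are the monitor's embedded RocksDB printing its effective options while mon.a comes up; they are informational startup output, not errors. If you need to pull just this dump out of a busy journal, filtering the unit's log is enough (a sketch; the unit name is taken from the journalctl lines above):

    import subprocess

    FSID = "f4876d10-1c83-11f1-ae9f-3f8bea697626"
    unit = f"ceph-{FSID}@mon.a.service"

    # Print only the RocksDB option dump from the mon's journal.
    log = subprocess.run(
        ["sudo", "journalctl", "-u", unit, "--no-pager"],
        check=True, capture_output=True, text=True,
    ).stdout
    for line in log.splitlines():
        if "rocksdb:" in line:
            print(line)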
2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.rate_limiter: (nil) 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T13:22:21.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.wal_recovery_mode: 2 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.enable_thread_tracking: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.enable_pipelined_write: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.unordered_write: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.row_cache: None 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.wal_filter: None 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.allow_ingest_behind: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.two_write_queues: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.manual_wal_flush: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.wal_compression: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.atomic_flush: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.log_readahead_size: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: 
rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.best_efforts_recovery: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.allow_data_in_errors: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.db_host_id: __hostname__ 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_background_jobs: 2 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_background_compactions: -1 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_subcompactions: 1 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_total_wal_size: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_open_files: -1 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.bytes_per_sync: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compaction_readahead_size: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_background_flushes: -1 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Compression 
algorithms supported: 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: kZSTD supported: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: kXpressCompression supported: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: kBZip2Compression supported: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: kLZ4Compression supported: 1 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: kZlibCompression supported: 1 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: kLZ4HCCompression supported: 1 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: kSnappyCompression supported: 1 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.merge_operator: 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compaction_filter: None 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compaction_filter_factory: None 2026-03-10T13:22:21.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.sst_partitioner_factory: None 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556611b14640) 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: cache_index_and_filter_blocks: 1 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: pin_top_level_index_and_filter: 1 2026-03-10T13:22:21.983 
INFO:journalctl@ceph.mon.a.vm02.stdout: index_type: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: data_block_index_type: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: index_shortening: 1 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: checksum: 4 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: no_block_cache: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: block_cache: 0x556611b39350 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: block_cache_name: BinnedLRUCache 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: block_cache_options: 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: capacity : 536870912 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: num_shard_bits : 4 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: strict_capacity_limit : 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: high_pri_pool_ratio: 0.000 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: block_cache_compressed: (nil) 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: persistent_cache: (nil) 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: block_size: 4096 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: block_size_deviation: 10 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: block_restart_interval: 16 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: index_block_restart_interval: 1 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: metadata_block_size: 4096 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: partition_filters: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: use_delta_encoding: 1 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: filter_policy: bloomfilter 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: whole_key_filtering: 1 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: verify_compression: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: read_amp_bytes_per_bit: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: format_version: 5 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: enable_index_compression: 1 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: block_align: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: max_auto_readahead_size: 262144 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: prepopulate_block_cache: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: initial_auto_readahead_size: 8192 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout: num_file_reads_for_auto_readahead: 2 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.write_buffer_size: 33554432 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_write_buffer_number: 2 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compression: NoCompression 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.bottommost_compression: Disabled 2026-03-10T13:22:21.983 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.prefix_extractor: nullptr 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.num_levels: 7 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compression_opts.level: 32767 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compression_opts.strategy: 0 2026-03-10T13:22:21.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: 
rocksdb: Options.compression_opts.enabled: false 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.target_file_size_base: 67108864 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.arena_block_size: 1048576 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T13:22:21.984 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.disable_auto_compactions: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.inplace_update_support: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.bloom_locality: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.max_successive_merges: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.paranoid_file_checks: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.force_consistency_checks: 1 
2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.ttl: 2592000 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.enable_blob_files: false 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.min_blob_size: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.blob_file_size: 268435456 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T13:22:21.984 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.blob_file_starting_level: 0 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 81aa9300-46f2-4475-b33b-e280653f4f76 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773148941750843, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773148941752820, "cf_name": 
"default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 225, "table_properties": {"data_size": 70895, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9705, "raw_average_key_size": 49, "raw_value_size": 65374, "raw_average_value_size": 333, "num_data_blocks": 8, "num_entries": 196, "num_filter_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773148941, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "81aa9300-46f2-4475-b33b-e280653f4f76", "db_session_id": "4B4OX8BIK06FTU8E54H4", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773148941752900, "job": 1, "event": "recovery_finished"} 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556611b3ae00 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: rocksdb: DB pointer 0x556611c50000 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: starting mon.a rank 0 at public addrs [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] at bind addrs [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: mon.a@-1(???) 
e1 preinit fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: mon.a@-1(???).mds e1 new map 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: mon.a@-1(???).mds e1 print_map 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout: e1 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout: btime 2026-03-10T13:22:20:678763+0000 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout: legacy client fscid: -1 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout: 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout: No filesystems configured 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: mon.a@-1(???).mgr e0 loading version 1 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: mon.a@-1(???).mgr e1 active server: (0) 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: mon.a@-1(???).mgr e1 mkfs or daemon transitioned to available, loading commands 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: monmap epoch 1 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: last_changed 2026-03-10T13:22:19.575353+0000 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: 
created 2026-03-10T13:22:19.575353+0000 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: min_mon_release 19 (squid) 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: election_strategy: 1 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: fsmap 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T13:22:21.985 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:21 vm02 ceph-mon[52534]: mgrmap e1: no daemons active 2026-03-10T13:22:22.126 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mgr.a 2026-03-10T13:22:22.126 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to reset failed state of unit ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mgr.a.service: Unit ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mgr.a.service not loaded. 2026-03-10T13:22:22.265 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626.target.wants/ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mgr.a.service → /etc/systemd/system/ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@.service. 2026-03-10T13:22:22.279 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:22 vm02 systemd[1]: Starting Ceph mgr.a for f4876d10-1c83-11f1-ae9f-3f8bea697626... 2026-03-10T13:22:22.440 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present 2026-03-10T13:22:22.440 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T13:22:22.440 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present 2026-03-10T13:22:22.440 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-10T13:22:22.440 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr to start... 2026-03-10T13:22:22.440 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr... 
2026-03-10T13:22:22.538 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:22 vm02 podman[52733]: 2026-03-10 13:22:22.390906196 +0000 UTC m=+0.020220522 container create 73f7e11261b700e6e35dd912f1b32e0d9523255d92c9c32d8ffcc0583a2660fd (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) 2026-03-10T13:22:22.538 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:22 vm02 podman[52733]: 2026-03-10 13:22:22.426092129 +0000 UTC m=+0.055406446 container init 73f7e11261b700e6e35dd912f1b32e0d9523255d92c9c32d8ffcc0583a2660fd (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T13:22:22.538 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:22 vm02 podman[52733]: 2026-03-10 13:22:22.42923108 +0000 UTC m=+0.058545406 container start 73f7e11261b700e6e35dd912f1b32e0d9523255d92c9c32d8ffcc0583a2660fd (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T13:22:22.538 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:22 vm02 bash[52733]: 73f7e11261b700e6e35dd912f1b32e0d9523255d92c9c32d8ffcc0583a2660fd 2026-03-10T13:22:22.538 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:22 vm02 podman[52733]: 2026-03-10 13:22:22.382960125 +0000 UTC m=+0.012274460 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:22:22.538 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:22 vm02 systemd[1]: Started 
Ceph mgr.a for f4876d10-1c83-11f1-ae9f-3f8bea697626. 2026-03-10T13:22:22.688 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T13:22:22.688 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:22:22.688 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsid": "f4876d10-1c83-11f1-ae9f-3f8bea697626", 2026-03-10T13:22:22.688 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T13:22:22.688 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T13:22:22.688 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T13:22:22.688 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T13:22:22.688 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 0 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T13:22:22.689 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T13:22:20:678763+0000", 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T13:22:22.689 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T13:22:20.679501+0000", 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:22:22.690 INFO:teuthology.orchestra.run.vm02.stdout:mgr not available, waiting (1/15)... 
2026-03-10T13:22:22.844 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:22 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:22.553+0000 7f4adedbf140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T13:22:22.844 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:22 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:22.599+0000 7f4adedbf140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T13:22:23.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:22 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/3793904964' entity='client.admin' 2026-03-10T13:22:23.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:22 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/688084520' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:22:23.344 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:23 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:23.056+0000 7f4adedbf140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T13:22:23.844 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:23 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:23.408+0000 7f4adedbf140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T13:22:23.844 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:23 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T13:22:23.844 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:23 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T13:22:23.844 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:23 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: from numpy import show_config as show_numpy_config 2026-03-10T13:22:23.844 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:23 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:23.508+0000 7f4adedbf140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T13:22:23.844 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:23 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:23.550+0000 7f4adedbf140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T13:22:23.844 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:23 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:23.628+0000 7f4adedbf140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T13:22:24.460 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:24 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:24.174+0000 7f4adedbf140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T13:22:24.460 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:24 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:24.294+0000 7f4adedbf140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:22:24.460 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:24 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:24.337+0000 7f4adedbf140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T13:22:24.460 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:24 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:24.375+0000 7f4adedbf140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T13:22:24.460 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:24 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:24.419+0000 7f4adedbf140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T13:22:24.460 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:24 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:24.459+0000 7f4adedbf140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T13:22:24.731 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:24 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:24.642+0000 7f4adedbf140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T13:22:24.731 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:24 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:24.697+0000 7f4adedbf140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T13:22:24.902 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T13:22:24.902 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsid": "f4876d10-1c83-11f1-ae9f-3f8bea697626", 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T13:22:24.903 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 0 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T13:22:24.903 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T13:22:20:678763+0000", 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T13:22:24.903 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T13:22:20.679501+0000", 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:22:24.904 INFO:teuthology.orchestra.run.vm02.stdout:mgr not available, waiting (2/15)... 2026-03-10T13:22:25.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:24 vm02 ceph-mon[52534]: from='client.? 
192.168.123.102:0/529552054' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:22:25.094 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:24 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:24.974+0000 7f4adedbf140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T13:22:25.564 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:25 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:25.279+0000 7f4adedbf140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T13:22:25.564 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:25 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:25.317+0000 7f4adedbf140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T13:22:25.564 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:25 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:25.361+0000 7f4adedbf140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T13:22:25.564 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:25 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:25.442+0000 7f4adedbf140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T13:22:25.564 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:25 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:25.481+0000 7f4adedbf140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T13:22:25.564 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:25 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:25.562+0000 7f4adedbf140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T13:22:25.829 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:25 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:25.680+0000 7f4adedbf140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:22:25.829 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:25 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:25.827+0000 7f4adedbf140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T13:22:26.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:26 vm02 ceph-mon[52534]: Activating manager daemon a 2026-03-10T13:22:26.094 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:25 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:25.868+0000 7f4adedbf140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T13:22:27.185 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:27 vm02 ceph-mon[52534]: mgrmap e2: a(active, starting, since 0.103737s) 2026-03-10T13:22:27.185 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:27 vm02 ceph-mon[52534]: from='mgr.14100 192.168.123.102:0/1703414070' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:22:27.186 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:27 vm02 ceph-mon[52534]: from='mgr.14100 192.168.123.102:0/1703414070' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:22:27.186 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:27 vm02 ceph-mon[52534]: from='mgr.14100 192.168.123.102:0/1703414070' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:22:27.186 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:27 vm02 ceph-mon[52534]: from='mgr.14100 
192.168.123.102:0/1703414070' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:22:27.186 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:27 vm02 ceph-mon[52534]: from='mgr.14100 192.168.123.102:0/1703414070' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:22:27.186 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:27 vm02 ceph-mon[52534]: from='mgr.14100 192.168.123.102:0/1703414070' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:22:27.186 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:27 vm02 ceph-mon[52534]: from='mgr.14100 192.168.123.102:0/1703414070' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:22:27.186 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:27 vm02 ceph-mon[52534]: from='mgr.14100 192.168.123.102:0/1703414070' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:22:27.186 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:27 vm02 ceph-mon[52534]: Manager daemon a is now available 2026-03-10T13:22:27.186 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:27 vm02 ceph-mon[52534]: from='mgr.14100 192.168.123.102:0/1703414070' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:22:27.186 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:27 vm02 ceph-mon[52534]: from='mgr.14100 192.168.123.102:0/1703414070' entity='mgr.a' 2026-03-10T13:22:27.186 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:27 vm02 ceph-mon[52534]: from='mgr.14100 192.168.123.102:0/1703414070' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:22:27.186 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:27 vm02 ceph-mon[52534]: from='mgr.14100 192.168.123.102:0/1703414070' entity='mgr.a' 2026-03-10T13:22:27.186 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:27 vm02 ceph-mon[52534]: from='mgr.14100 192.168.123.102:0/1703414070' entity='mgr.a' 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsid": "f4876d10-1c83-11f1-ae9f-3f8bea697626", 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 0 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T13:22:27.215 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-10T13:22:27.215 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T13:22:20:678763+0000", 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T13:22:27.216 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T13:22:27.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:22:27.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T13:22:20.679501+0000", 2026-03-10T13:22:27.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:22:27.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:22:27.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T13:22:27.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:22:27.217 INFO:teuthology.orchestra.run.vm02.stdout:mgr is available 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout fsid = f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.102:3300,v1:192.168.123.102:6789] 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T13:22:27.471 INFO:teuthology.orchestra.run.vm02.stdout:Enabling cephadm module... 
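The polling visible above ("mgr not available, waiting (2/15)...", then "mgr is available") can be approximated with a short Python sketch. This is illustrative only: it assumes a `ceph` binary on PATH with a reachable mon and admin keyring, and the 15-attempt budget is taken from the counter printed above rather than from the job configuration.

    # Illustrative sketch only: poll `ceph status` until the mgrmap reports an
    # active mgr, mirroring the "mgr not available, waiting (2/15)..." loop above.
    # Assumes `ceph` is on PATH and the mon is already answering; this is not
    # the test code itself.
    import json
    import subprocess
    import time

    def wait_for_mgr(attempts=15, delay=2.0):
        for i in range(1, attempts + 1):
            out = subprocess.run(
                ["ceph", "status", "--format", "json-pretty"],
                capture_output=True, text=True, check=True,
            ).stdout
            if json.loads(out).get("mgrmap", {}).get("available"):
                print("mgr is available")
                return True
            print(f"mgr not available, waiting ({i}/{attempts})...")
            time.sleep(delay)
        return False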
2026-03-10T13:22:28.325 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:28 vm02 ceph-mon[52534]: mgrmap e3: a(active, since 1.16132s) 2026-03-10T13:22:28.325 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:28 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/1672215978' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:22:28.325 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:28 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/3032808785' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T13:22:28.325 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:28 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/683267852' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T13:22:28.325 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:28 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: ignoring --setuser ceph since I am not root 2026-03-10T13:22:28.325 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:28 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: ignoring --setgroup ceph since I am not root 2026-03-10T13:22:28.325 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:28 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:28.198+0000 7fa9133dc140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T13:22:28.325 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:28 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:28.242+0000 7fa9133dc140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T13:22:28.364 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:22:28.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 4, 2026-03-10T13:22:28.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T13:22:28.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "active_name": "a", 2026-03-10T13:22:28.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T13:22:28.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:22:28.365 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for the mgr to restart... 2026-03-10T13:22:28.365 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr epoch 4... 2026-03-10T13:22:29.036 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:28 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:28.718+0000 7fa9133dc140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T13:22:29.305 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:29 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/683267852' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T13:22:29.305 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:29 vm02 ceph-mon[52534]: mgrmap e4: a(active, since 2s) 2026-03-10T13:22:29.305 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:29 vm02 ceph-mon[52534]: from='client.? 
192.168.123.102:0/324200833' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T13:22:29.305 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:29 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:29.087+0000 7fa9133dc140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T13:22:29.305 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:29 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T13:22:29.305 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:29 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T13:22:29.305 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:29 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: from numpy import show_config as show_numpy_config 2026-03-10T13:22:29.305 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:29 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:29.183+0000 7fa9133dc140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T13:22:29.305 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:29 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:29.223+0000 7fa9133dc140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T13:22:29.305 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:29 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:29.304+0000 7fa9133dc140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T13:22:30.118 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:29 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:29.847+0000 7fa9133dc140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T13:22:30.118 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:29 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:29.978+0000 7fa9133dc140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:22:30.118 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:30 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:30.023+0000 7fa9133dc140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T13:22:30.118 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:30 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:30.070+0000 7fa9133dc140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T13:22:30.118 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:30 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:30.116+0000 7fa9133dc140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T13:22:30.406 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:30 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:30.159+0000 7fa9133dc140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T13:22:30.406 
INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:30 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:30.350+0000 7fa9133dc140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T13:22:30.844 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:30 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:30.404+0000 7fa9133dc140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T13:22:30.844 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:30 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:30.647+0000 7fa9133dc140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T13:22:31.254 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:30 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:30.949+0000 7fa9133dc140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T13:22:31.254 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:30 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:30.990+0000 7fa9133dc140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T13:22:31.254 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:31.035+0000 7fa9133dc140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T13:22:31.254 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:31.122+0000 7fa9133dc140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T13:22:31.255 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:31.161+0000 7fa9133dc140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T13:22:31.523 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:31.253+0000 7fa9133dc140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T13:22:31.523 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:31.375+0000 7fa9133dc140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:22:31.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-mon[52534]: Active manager daemon a restarted 2026-03-10T13:22:31.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-mon[52534]: Activating manager daemon a 2026-03-10T13:22:31.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-mon[52534]: osdmap e2: 0 total, 0 up, 0 in 2026-03-10T13:22:31.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-mon[52534]: mgrmap e5: a(active, starting, since 0.00512796s) 2026-03-10T13:22:31.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:22:31.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:22:31.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' 
cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:22:31.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:22:31.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:22:31.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-mon[52534]: Manager daemon a is now available 2026-03-10T13:22:31.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' 2026-03-10T13:22:31.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' 2026-03-10T13:22:31.844 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:31.521+0000 7fa9133dc140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T13:22:31.845 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:31 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:31.560+0000 7fa9133dc140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T13:22:32.622 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:22:32.622 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 6, 2026-03-10T13:22:32.622 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T13:22:32.622 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:22:32.622 INFO:teuthology.orchestra.run.vm02.stdout:mgr epoch 4 is available 2026-03-10T13:22:32.622 INFO:teuthology.orchestra.run.vm02.stdout:Setting orchestrator backend to cephadm... 2026-03-10T13:22:32.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:32 vm02 ceph-mon[52534]: Found migration_current of "None". Setting to last migration. 
2026-03-10T13:22:32.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:32 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:22:32.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:32 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:22:32.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:32 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:22:32.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:32 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:22:32.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:32 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' 2026-03-10T13:22:32.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:32 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' 2026-03-10T13:22:32.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:32 vm02 ceph-mon[52534]: mgrmap e6: a(active, since 1.00965s) 2026-03-10T13:22:33.191 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-10T13:22:33.191 INFO:teuthology.orchestra.run.vm02.stdout:Generating ssh key... 2026-03-10T13:22:33.709 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: Generating public/private ed25519 key pair. 2026-03-10T13:22:33.709 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: Your identification has been saved in /tmp/tmpbbcpuqxs/key 2026-03-10T13:22:33.709 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: Your public key has been saved in /tmp/tmpbbcpuqxs/key.pub 2026-03-10T13:22:33.710 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: The key fingerprint is: 2026-03-10T13:22:33.710 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: SHA256:peXm8zhcfavc954m/HPu8ABEhHspLgjXemQItRg+fXk ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:22:33.710 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: The key's randomart image is: 2026-03-10T13:22:33.710 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: +--[ED25519 256]--+ 2026-03-10T13:22:33.710 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: | ... oo | 2026-03-10T13:22:33.710 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: | ..+ . ... | 2026-03-10T13:22:33.710 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: | +.ooo E... 
| 2026-03-10T13:22:33.710 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: | ..o.+*o.o | 2026-03-10T13:22:33.710 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: | o =S.oo.. | 2026-03-10T13:22:33.710 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: | o oo. ... .| 2026-03-10T13:22:33.710 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: | . oo.. o..| 2026-03-10T13:22:33.710 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: | o+.o.B+| 2026-03-10T13:22:33.710 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: | ...o=B@| 2026-03-10T13:22:33.710 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: +----[SHA256]-----+ 2026-03-10T13:22:33.758 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBvx5RmI2UUBYjpdn1ULhCo8P6W1CK7QbMXhpKRSR5q ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:22:33.759 INFO:teuthology.orchestra.run.vm02.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-10T13:22:33.759 INFO:teuthology.orchestra.run.vm02.stdout:Adding key to root@localhost authorized_keys... 2026-03-10T13:22:33.759 INFO:teuthology.orchestra.run.vm02.stdout:Adding host vm02... 2026-03-10T13:22:34.004 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-mon[52534]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T13:22:34.004 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-mon[52534]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T13:22:34.004 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-mon[52534]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:22:34.004 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' 2026-03-10T13:22:34.004 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:22:34.004 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-mon[52534]: [10/Mar/2026:13:22:32] ENGINE Bus STARTING 2026-03-10T13:22:34.004 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-mon[52534]: [10/Mar/2026:13:22:33] ENGINE Serving on http://192.168.123.102:8765 2026-03-10T13:22:34.004 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:22:34.004 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' 2026-03-10T13:22:34.004 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:33 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' 2026-03-10T13:22:35.202 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 
10 13:22:34 vm02 ceph-mon[52534]: [10/Mar/2026:13:22:33] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T13:22:35.202 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:34 vm02 ceph-mon[52534]: [10/Mar/2026:13:22:33] ENGINE Bus STARTED 2026-03-10T13:22:35.202 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:34 vm02 ceph-mon[52534]: [10/Mar/2026:13:22:33] ENGINE Client ('192.168.123.102', 53916) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:22:35.202 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:34 vm02 ceph-mon[52534]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:22:35.202 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:34 vm02 ceph-mon[52534]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:22:35.202 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:34 vm02 ceph-mon[52534]: Generating ssh key... 2026-03-10T13:22:35.202 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:34 vm02 ceph-mon[52534]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:22:35.202 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:34 vm02 ceph-mon[52534]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm02", "addr": "192.168.123.102", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:22:35.202 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:34 vm02 ceph-mon[52534]: mgrmap e7: a(active, since 2s) 2026-03-10T13:22:35.582 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Added host 'vm02' with addr '192.168.123.102' 2026-03-10T13:22:35.582 INFO:teuthology.orchestra.run.vm02.stdout:Deploying unmanaged mon service... 2026-03-10T13:22:35.878 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-10T13:22:35.878 INFO:teuthology.orchestra.run.vm02.stdout:Deploying unmanaged mgr service... 2026-03-10T13:22:36.136 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:35 vm02 ceph-mon[52534]: Deploying cephadm binary to vm02 2026-03-10T13:22:36.136 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:35 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' 2026-03-10T13:22:36.136 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:35 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:22:36.136 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:35 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' 2026-03-10T13:22:36.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 
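The host-enrollment steps above (generate the cluster SSH key, publish it, add the bootstrap host, keep the mon and mgr specs unmanaged) reduce to a handful of CLI calls. The sketch below uses the host name, address, and key path from this log; everything else is an assumption and it is not the cephadm source.

    # Sketch of the enrollment steps logged above; illustrative only.
    import pathlib
    import subprocess

    def ceph(*args):
        return subprocess.run(["ceph", *args], capture_output=True,
                              text=True, check=True).stdout

    ceph("cephadm", "set-user", "root")          # "cephadm set-user" dispatch above
    ceph("cephadm", "generate-key")              # "Generating ssh key..."
    pub = ceph("cephadm", "get-pub-key")
    pathlib.Path("/home/ubuntu/cephtest/ceph.pub").write_text(pub)
    # the public key also has to land in root's authorized_keys on the host
    ceph("orch", "host", "add", "vm02", "192.168.123.102")
    ceph("orch", "apply", "mon", "--unmanaged")  # "Deploying unmanaged mon service..."
    ceph("orch", "apply", "mgr", "--unmanaged")  # "Deploying unmanaged mgr service..."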
2026-03-10T13:22:37.548 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:37 vm02 ceph-mon[52534]: Added host vm02 2026-03-10T13:22:37.548 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:37 vm02 ceph-mon[52534]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:22:37.548 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:37 vm02 ceph-mon[52534]: Saving service mon spec with placement count:5 2026-03-10T13:22:37.548 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:37 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' 2026-03-10T13:22:37.548 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:37 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/427382553' entity='client.admin' 2026-03-10T13:22:37.548 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:37 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/2993451660' entity='client.admin' 2026-03-10T13:22:37.558 INFO:teuthology.orchestra.run.vm02.stdout:Enabling the dashboard module... 2026-03-10T13:22:38.454 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:38 vm02 ceph-mon[52534]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:22:38.454 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:38 vm02 ceph-mon[52534]: Saving service mgr spec with placement count:2 2026-03-10T13:22:38.454 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:38 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' 2026-03-10T13:22:38.454 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:38 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/4280623592' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T13:22:39.244 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:39 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' 2026-03-10T13:22:39.245 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:39 vm02 ceph-mon[52534]: from='mgr.14118 192.168.123.102:0/3136330883' entity='mgr.a' 2026-03-10T13:22:39.245 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:39 vm02 ceph-mon[52534]: from='client.? 
192.168.123.102:0/4280623592' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T13:22:39.245 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:39 vm02 ceph-mon[52534]: mgrmap e8: a(active, since 7s) 2026-03-10T13:22:39.245 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:39 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: ignoring --setuser ceph since I am not root 2026-03-10T13:22:39.245 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:39 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: ignoring --setgroup ceph since I am not root 2026-03-10T13:22:39.245 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:39 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:39.197+0000 7f5069d5a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T13:22:39.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:22:39.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 8, 2026-03-10T13:22:39.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T13:22:39.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "active_name": "a", 2026-03-10T13:22:39.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T13:22:39.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:22:39.351 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for the mgr to restart... 2026-03-10T13:22:39.352 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr epoch 8... 2026-03-10T13:22:39.594 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:39 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:39.243+0000 7f5069d5a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T13:22:40.094 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:39 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:39.739+0000 7f5069d5a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T13:22:40.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:40 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/2185371415' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T13:22:40.594 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:40 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:40.097+0000 7f5069d5a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T13:22:40.594 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:40 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T13:22:40.594 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:40 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
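After the dashboard module is enabled and the mgr has restarted (the "Waiting for mgr epoch 8..." wait above follows the same pattern as the cephadm-module restart sketched earlier), one way to confirm the mgr is serving the dashboard endpoint is `ceph mgr services`. This check is not part of the logged bootstrap; it is only an illustrative sketch.

    # Illustrative check, not used by the bootstrap above: list the URIs the
    # active mgr is serving and look for the dashboard entry.
    import json
    import subprocess

    out = subprocess.run(["ceph", "mgr", "services", "-f", "json"],
                         capture_output=True, text=True, check=True).stdout
    services = json.loads(out)
    print(services.get("dashboard", "dashboard not serving yet"))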
2026-03-10T13:22:40.594 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:40 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: from numpy import show_config as show_numpy_config 2026-03-10T13:22:40.594 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:40 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:40.195+0000 7f5069d5a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T13:22:40.594 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:40 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:40.242+0000 7f5069d5a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T13:22:40.594 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:40 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:40.324+0000 7f5069d5a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T13:22:41.141 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:40 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:40.849+0000 7f5069d5a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T13:22:41.141 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:40 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:40.969+0000 7f5069d5a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:22:41.141 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:41 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:41.015+0000 7f5069d5a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T13:22:41.141 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:41 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:41.052+0000 7f5069d5a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T13:22:41.141 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:41 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:41.097+0000 7f5069d5a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T13:22:41.141 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:41 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:41.139+0000 7f5069d5a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T13:22:41.391 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:41 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:41.331+0000 7f5069d5a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T13:22:41.391 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:41 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:41.389+0000 7f5069d5a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T13:22:41.972 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:41 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:41.645+0000 7f5069d5a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T13:22:41.972 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:41 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:41.971+0000 7f5069d5a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T13:22:42.277 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:42.009+0000 7f5069d5a140 -1 mgr[py] Module selftest has 
missing NOTIFY_TYPES member 2026-03-10T13:22:42.277 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:42.055+0000 7f5069d5a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T13:22:42.277 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:42.144+0000 7f5069d5a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T13:22:42.277 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:42.186+0000 7f5069d5a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T13:22:42.277 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:42.275+0000 7f5069d5a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T13:22:42.567 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:42.408+0000 7f5069d5a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:22:42.567 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:42.566+0000 7f5069d5a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T13:22:42.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-mon[52534]: Active manager daemon a restarted 2026-03-10T13:22:42.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-mon[52534]: Activating manager daemon a 2026-03-10T13:22:42.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-mon[52534]: osdmap e3: 0 total, 0 up, 0 in 2026-03-10T13:22:42.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-mon[52534]: mgrmap e9: a(active, starting, since 0.00656758s) 2026-03-10T13:22:42.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:22:42.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:22:42.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:22:42.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:22:42.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:22:42.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-mon[52534]: Manager daemon a is now available 2026-03-10T13:22:42.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:22:42.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-mon[52534]: 
from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:22:42.844 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:22:42 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a[52743]: 2026-03-10T13:22:42.607+0000 7f5069d5a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T13:22:43.707 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:22:43.707 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 10, 2026-03-10T13:22:43.707 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T13:22:43.707 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:22:43.707 INFO:teuthology.orchestra.run.vm02.stdout:mgr epoch 8 is available 2026-03-10T13:22:43.707 INFO:teuthology.orchestra.run.vm02.stdout:Generating a dashboard self-signed certificate... 2026-03-10T13:22:44.054 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:43 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:22:44.055 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:43 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:44.055 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:43 vm02 ceph-mon[52534]: mgrmap e10: a(active, since 1.01146s) 2026-03-10T13:22:44.101 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-10T13:22:44.101 INFO:teuthology.orchestra.run.vm02.stdout:Creating initial admin user... 2026-03-10T13:22:44.565 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$fIE7Lo8CAHP4kFNGhk.rp.BvsE/UQis2BUYgNf4RtIv.k4EYZH6uS", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773148964, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-10T13:22:44.566 INFO:teuthology.orchestra.run.vm02.stdout:Fetching dashboard port number... 2026-03-10T13:22:44.873 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 8443 2026-03-10T13:22:44.873 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present 2026-03-10T13:22:44.873 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to open ports <[8443]>. 
firewalld.service is not available 2026-03-10T13:22:44.875 INFO:teuthology.orchestra.run.vm02.stdout:Ceph Dashboard is now available at: 2026-03-10T13:22:44.875 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:22:44.875 INFO:teuthology.orchestra.run.vm02.stdout: URL: https://vm02.local:8443/ 2026-03-10T13:22:44.875 INFO:teuthology.orchestra.run.vm02.stdout: User: admin 2026-03-10T13:22:44.875 INFO:teuthology.orchestra.run.vm02.stdout: Password: hp0j0gbjdp 2026-03-10T13:22:44.875 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:22:44.875 INFO:teuthology.orchestra.run.vm02.stdout:Saving cluster configuration to /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/config directory 2026-03-10T13:22:45.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:44 vm02 ceph-mon[52534]: [10/Mar/2026:13:22:43] ENGINE Bus STARTING 2026-03-10T13:22:45.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:44 vm02 ceph-mon[52534]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T13:22:45.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:44 vm02 ceph-mon[52534]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T13:22:45.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:44 vm02 ceph-mon[52534]: [10/Mar/2026:13:22:43] ENGINE Serving on http://192.168.123.102:8765 2026-03-10T13:22:45.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:44 vm02 ceph-mon[52534]: [10/Mar/2026:13:22:43] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T13:22:45.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:44 vm02 ceph-mon[52534]: [10/Mar/2026:13:22:43] ENGINE Bus STARTED 2026-03-10T13:22:45.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:44 vm02 ceph-mon[52534]: [10/Mar/2026:13:22:43] ENGINE Client ('192.168.123.102', 43956) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:22:45.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:44 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:45.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:44 vm02 ceph-mon[52534]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:22:45.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:44 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:45.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:44 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:45.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:44 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:45.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:44 vm02 ceph-mon[52534]: from='client.? 
192.168.123.102:0/4126136390' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T13:22:45.231 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout:Or, if you are only running a single cluster on this host: 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout: ceph telemetry on 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout:For more information see: 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:22:45.232 INFO:teuthology.orchestra.run.vm02.stdout:Bootstrap complete. 2026-03-10T13:22:45.263 INFO:tasks.cephadm:Fetching config... 2026-03-10T13:22:45.263 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T13:22:45.263 DEBUG:teuthology.orchestra.run.vm02:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-10T13:22:45.281 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-10T13:22:45.281 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T13:22:45.281 DEBUG:teuthology.orchestra.run.vm02:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-10T13:22:45.365 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-10T13:22:45.365 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T13:22:45.365 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/keyring of=/dev/stdout 2026-03-10T13:22:45.437 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-10T13:22:45.437 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T13:22:45.437 DEBUG:teuthology.orchestra.run.vm02:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-10T13:22:45.496 INFO:tasks.cephadm:Installing pub ssh key for root users... 
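The "Fetching ..." steps above pull the bootstrap artifacts off the node with dd so teuthology can reuse them. A rough Python equivalent is sketched below, with the user, host, and paths taken from this log (the mon keyring, which the log fetches with sudo, is omitted here); error handling is left out.

    # Sketch of fetching bootstrap artifacts from the remote node over ssh,
    # mirroring the dd commands logged above. Host, user, and paths are from
    # this job's log; this is not the teuthology task code.
    import subprocess

    HOST = "ubuntu@vm02.local"
    FILES = {
        "ceph.conf": "/etc/ceph/ceph.conf",
        "ceph.client.admin.keyring": "/etc/ceph/ceph.client.admin.keyring",
        "ceph.pub": "/home/ubuntu/cephtest/ceph.pub",
    }

    def fetch(path):
        cmd = ["ssh", HOST, f"set -ex; dd if={path} of=/dev/stdout"]
        return subprocess.run(cmd, capture_output=True, check=True).stdout

    for name, path in FILES.items():
        with open(name, "wb") as f:
            f.write(fetch(path))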
2026-03-10T13:22:45.496 DEBUG:teuthology.orchestra.run.vm02:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBvx5RmI2UUBYjpdn1ULhCo8P6W1CK7QbMXhpKRSR5q ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T13:22:45.589 INFO:teuthology.orchestra.run.vm02.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBvx5RmI2UUBYjpdn1ULhCo8P6W1CK7QbMXhpKRSR5q ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:22:45.606 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-10T13:22:45.814 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:45 vm02 ceph-mon[52534]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:22:45.814 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:45 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/2294125598' entity='client.admin' 2026-03-10T13:22:45.814 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:45 vm02 ceph-mon[52534]: mgrmap e11: a(active, since 2s) 2026-03-10T13:22:45.861 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:22:46.476 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-10T13:22:46.476 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-10T13:22:46.694 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:22:47.002 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T13:22:47.002 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph osd crush tunables default 2026-03-10T13:22:47.207 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:22:47.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:47 vm02 ceph-mon[52534]: from='client.? 
192.168.123.102:0/2622283199' entity='client.admin' 2026-03-10T13:22:47.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:47 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:47.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:47 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:47.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:47 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:22:47.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:47 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:47.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:47 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:22:47.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:47 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:47.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:47 vm02 ceph-mon[52534]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:22:47.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:47 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:47.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:47 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:22:47.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:47 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:22:47.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:47 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:22:47.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:47 vm02 ceph-mon[52534]: Updating vm02:/etc/ceph/ceph.conf 2026-03-10T13:22:48.418 INFO:teuthology.orchestra.run.vm02.stderr:adjusted tunables profile to default 2026-03-10T13:22:48.464 INFO:tasks.cephadm:Adding mon.a on vm02 2026-03-10T13:22:48.464 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph orch apply mon '1;vm02:192.168.123.102=a' 2026-03-10T13:22:48.648 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:22:48.671 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:48 vm02 ceph-mon[52534]: Updating vm02:/var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/config/ceph.conf 2026-03-10T13:22:48.671 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:48 vm02 ceph-mon[52534]: Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:22:48.671 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:48 vm02 ceph-mon[52534]: from='client.? 
192.168.123.102:0/2674169828' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T13:22:48.671 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:48 vm02 ceph-mon[52534]: Updating vm02:/var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/config/ceph.client.admin.keyring 2026-03-10T13:22:48.671 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:48 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:48.671 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:48 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:48.671 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:48 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:48.882 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled mon update... 2026-03-10T13:22:48.957 INFO:tasks.cephadm:Waiting for 1 mons in monmap... 2026-03-10T13:22:48.957 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph mon dump -f json 2026-03-10T13:22:49.200 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:22:49.505 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:22:49.506 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":1,"fsid":"f4876d10-1c83-11f1-ae9f-3f8bea697626","modified":"2026-03-10T13:22:19.575353Z","created":"2026-03-10T13:22:19.575353Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T13:22:49.506 INFO:teuthology.orchestra.run.vm02.stderr:dumped monmap epoch 1 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='client.? 
192.168.123.102:0/2674169828' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "1;vm02:192.168.123.102=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: Saving service mon spec with placement vm02:192.168.123.102=a;count:1 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: Reconfiguring mon.a (unknown last config time)... 
2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: Reconfiguring daemon mon.a on vm02 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:49.506 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:49 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:49.568 INFO:tasks.cephadm:Generating final ceph.conf file... 2026-03-10T13:22:49.568 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph config generate-minimal-conf 2026-03-10T13:22:49.740 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:22:49.985 INFO:teuthology.orchestra.run.vm02.stdout:# minimal ceph.conf for f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:22:49.985 INFO:teuthology.orchestra.run.vm02.stdout:[global] 2026-03-10T13:22:49.985 INFO:teuthology.orchestra.run.vm02.stdout: fsid = f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:22:49.985 INFO:teuthology.orchestra.run.vm02.stdout: mon_host = [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 2026-03-10T13:22:50.064 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 2026-03-10T13:22:50.065 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T13:22:50.065 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T13:22:50.093 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T13:22:50.093 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:22:50.163 INFO:tasks.cephadm:Adding mgr.a on vm02 2026-03-10T13:22:50.163 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph orch apply mgr '1;vm02=a' 2026-03-10T13:22:50.385 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:22:50.662 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled mgr update... 2026-03-10T13:22:50.680 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:50 vm02 ceph-mon[52534]: mgrmap e12: a(active, since 6s) 2026-03-10T13:22:50.680 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:50 vm02 ceph-mon[52534]: from='client.? 
192.168.123.102:0/1964677811' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:22:50.680 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:50 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/2202112389' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:22:50.745 INFO:tasks.cephadm:Deploying OSDs... 2026-03-10T13:22:50.745 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T13:22:50.745 DEBUG:teuthology.orchestra.run.vm02:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T13:22:50.764 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T13:22:50.764 DEBUG:teuthology.orchestra.run.vm02:> ls /dev/[sv]d? 2026-03-10T13:22:50.831 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vda 2026-03-10T13:22:50.831 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdb 2026-03-10T13:22:50.831 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdc 2026-03-10T13:22:50.831 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdd 2026-03-10T13:22:50.831 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vde 2026-03-10T13:22:50.831 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T13:22:50.831 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T13:22:50.831 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdb 2026-03-10T13:22:50.892 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdb 2026-03-10T13:22:50.892 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T13:22:50.892 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10 2026-03-10T13:22:50.892 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T13:22:50.892 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T13:22:50.892 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 13:22:46.348813525 +0000 2026-03-10T13:22:50.892 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 13:19:47.343749273 +0000 2026-03-10T13:22:50.892 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 13:19:47.343749273 +0000 2026-03-10T13:22:50.892 INFO:teuthology.orchestra.run.vm02.stdout: Birth: 2026-03-10 13:16:41.300000000 +0000 2026-03-10T13:22:50.892 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T13:22:50.986 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-10T13:22:50.986 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-10T13:22:50.986 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000101209 s, 5.1 MB/s 2026-03-10T13:22:50.987 DEBUG:teuthology.orchestra.run.vm02:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T13:22:51.012 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdc 2026-03-10T13:22:51.074 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdc 2026-03-10T13:22:51.074 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T13:22:51.074 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20 2026-03-10T13:22:51.074 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T13:22:51.074 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T13:22:51.074 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 13:22:46.385813514 +0000 2026-03-10T13:22:51.074 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 13:19:47.344749275 +0000 2026-03-10T13:22:51.074 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 13:19:47.344749275 +0000 2026-03-10T13:22:51.074 INFO:teuthology.orchestra.run.vm02.stdout: Birth: 2026-03-10 13:16:41.306000000 +0000 2026-03-10T13:22:51.074 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T13:22:51.139 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-10T13:22:51.140 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-10T13:22:51.140 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000218569 s, 2.3 MB/s 2026-03-10T13:22:51.140 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T13:22:51.199 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdd 2026-03-10T13:22:51.260 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdd 2026-03-10T13:22:51.260 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T13:22:51.260 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-10T13:22:51.260 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T13:22:51.260 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T13:22:51.260 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 13:22:46.435813500 +0000 2026-03-10T13:22:51.261 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 13:19:47.347749278 +0000 2026-03-10T13:22:51.261 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 13:19:47.347749278 +0000 2026-03-10T13:22:51.261 INFO:teuthology.orchestra.run.vm02.stdout: Birth: 2026-03-10 13:16:41.315000000 +0000 2026-03-10T13:22:51.261 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T13:22:51.326 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-10T13:22:51.326 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-10T13:22:51.326 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000188252 s, 2.7 MB/s 2026-03-10T13:22:51.328 DEBUG:teuthology.orchestra.run.vm02:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T13:22:51.387 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vde 2026-03-10T13:22:51.446 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vde 2026-03-10T13:22:51.446 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T13:22:51.446 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-10T13:22:51.446 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T13:22:51.446 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T13:22:51.446 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 13:22:46.517813476 +0000 2026-03-10T13:22:51.446 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 13:19:47.346749277 +0000 2026-03-10T13:22:51.446 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 13:19:47.346749277 +0000 2026-03-10T13:22:51.446 INFO:teuthology.orchestra.run.vm02.stdout: Birth: 2026-03-10 13:16:41.392000000 +0000 2026-03-10T13:22:51.446 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T13:22:51.515 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-10T13:22:51.515 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-10T13:22:51.515 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000204763 s, 2.5 MB/s 2026-03-10T13:22:51.517 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T13:22:51.576 INFO:tasks.cephadm:Deploying osd.0 on vm02 with /dev/vde... 2026-03-10T13:22:51.576 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- lvm zap /dev/vde 2026-03-10T13:22:51.797 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:22:51.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:51 vm02 ceph-mon[52534]: from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:22:51.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:51 vm02 ceph-mon[52534]: Saving service mgr spec with placement vm02=a;count:1 2026-03-10T13:22:51.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:51 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:51.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:51 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:22:51.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:51 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:22:51.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:51 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:22:51.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:51 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 
2026-03-10T13:22:51.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:51 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:51.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:51 vm02 ceph-mon[52534]: Reconfiguring mgr.a (unknown last config time)... 2026-03-10T13:22:51.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:51 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T13:22:51.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:51 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T13:22:51.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:51 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:22:51.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:51 vm02 ceph-mon[52534]: Reconfiguring daemon mgr.a on vm02 2026-03-10T13:22:51.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:51 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:51.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:51 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:22:52.612 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:22:52.637 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph orch daemon add osd vm02:/dev/vde 2026-03-10T13:22:52.817 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:22:53.374 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:53 vm02 ceph-mon[52534]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:22:53.374 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:53 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:22:53.374 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:53 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:22:53.374 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:53 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:22:54.309 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:54 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/2863068435' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "796001f7-50f6-444f-a6c7-b385e4f4101d"}]: dispatch 2026-03-10T13:22:54.309 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:54 vm02 ceph-mon[52534]: from='client.? 
192.168.123.102:0/2863068435' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "796001f7-50f6-444f-a6c7-b385e4f4101d"}]': finished 2026-03-10T13:22:54.309 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:54 vm02 ceph-mon[52534]: osdmap e5: 1 total, 0 up, 1 in 2026-03-10T13:22:54.309 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:54 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:22:55.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:55 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/3940851874' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:22:58.232 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:58 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T13:22:58.232 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:58 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:22:59.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:22:59 vm02 ceph-mon[52534]: Deploying daemon osd.0 on vm02 2026-03-10T13:23:00.211 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:00 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:23:00.211 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:00 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:23:00.211 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:00 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:23:01.252 INFO:teuthology.orchestra.run.vm02.stdout:Created osd(s) 0 on host 'vm02' 2026-03-10T13:23:01.314 DEBUG:teuthology.orchestra.run.vm02:osd.0> sudo journalctl -f -n 0 -u ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@osd.0.service 2026-03-10T13:23:01.316 INFO:tasks.cephadm:Waiting for 1 OSDs to come up... 
2026-03-10T13:23:01.316 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph osd stat -f json 2026-03-10T13:23:01.606 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:23:01.771 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 13:23:01 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-osd-0[61705]: 2026-03-10T13:23:01.528+0000 7fc387372740 -1 osd.0 0 log_to_monitors true 2026-03-10T13:23:01.871 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:23:01.962 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":5,"num_osds":1,"num_up_osds":0,"osd_up_since":0,"num_in_osds":1,"osd_in_since":1773148973,"num_remapped_pgs":0} 2026-03-10T13:23:02.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:01 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:23:02.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:01 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:23:02.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:01 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:23:02.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:01 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:23:02.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:01 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:23:02.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:01 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:23:02.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:01 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:23:02.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:01 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:23:02.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:01 vm02 ceph-mon[52534]: from='osd.0 [v2:192.168.123.102:6802/3581614210,v1:192.168.123.102:6803/3581614210]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T13:23:02.962 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph osd stat -f json 2026-03-10T13:23:03.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:02 vm02 ceph-mon[52534]: from='client.? 
192.168.123.102:0/1074687690' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:23:03.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:02 vm02 ceph-mon[52534]: from='osd.0 [v2:192.168.123.102:6802/3581614210,v1:192.168.123.102:6803/3581614210]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T13:23:03.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:02 vm02 ceph-mon[52534]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T13:23:03.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:02 vm02 ceph-mon[52534]: from='osd.0 [v2:192.168.123.102:6802/3581614210,v1:192.168.123.102:6803/3581614210]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T13:23:03.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:02 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:23:03.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:02 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:23:03.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:02 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:23:03.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:02 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:23:03.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:02 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:23:03.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:02 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:23:03.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:02 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:23:03.138 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:23:03.402 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:23:03.495 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":7,"num_osds":1,"num_up_osds":0,"osd_up_since":0,"num_in_osds":1,"osd_in_since":1773148973,"num_remapped_pgs":0} 2026-03-10T13:23:03.813 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:03 vm02 ceph-mon[52534]: Detected new or changed devices on vm02 2026-03-10T13:23:03.813 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:03 vm02 ceph-mon[52534]: Adjusting osd_memory_target on vm02 to 257.0M 2026-03-10T13:23:03.813 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:03 vm02 ceph-mon[52534]: Unable to set osd_memory_target on vm02 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-10T13:23:03.813 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:03 vm02 ceph-mon[52534]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:23:03.813 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:03 vm02 ceph-mon[52534]: from='osd.0 [v2:192.168.123.102:6802/3581614210,v1:192.168.123.102:6803/3581614210]' entity='osd.0' cmd='[{"prefix": "osd crush 
create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T13:23:04.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:03 vm02 ceph-mon[52534]: osdmap e7: 1 total, 0 up, 1 in 2026-03-10T13:23:04.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:03 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:23:04.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:03 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/2857389075' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:23:04.496 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph osd stat -f json 2026-03-10T13:23:04.522 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 13:23:04 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-osd-0[61705]: 2026-03-10T13:23:04.196+0000 7fc3832f3640 -1 osd.0 0 waiting for initial osdmap 2026-03-10T13:23:04.522 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 13:23:04 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-osd-0[61705]: 2026-03-10T13:23:04.202+0000 7fc37e91c640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T13:23:04.696 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:23:04.967 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:23:05.047 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":7,"num_osds":1,"num_up_osds":0,"osd_up_since":0,"num_in_osds":1,"osd_in_since":1773148973,"num_remapped_pgs":0} 2026-03-10T13:23:05.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:05 vm02 ceph-mon[52534]: purged_snaps scrub starts 2026-03-10T13:23:05.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:05 vm02 ceph-mon[52534]: purged_snaps scrub ok 2026-03-10T13:23:05.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:05 vm02 ceph-mon[52534]: from='osd.0 [v2:192.168.123.102:6802/3581614210,v1:192.168.123.102:6803/3581614210]' entity='osd.0' 2026-03-10T13:23:05.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:05 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:23:05.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:05 vm02 ceph-mon[52534]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:23:05.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:05 vm02 ceph-mon[52534]: from='client.? 
192.168.123.102:0/1093925446' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:23:06.049 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph osd stat -f json 2026-03-10T13:23:06.236 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:23:06.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:06 vm02 ceph-mon[52534]: osd.0 [v2:192.168.123.102:6802/3581614210,v1:192.168.123.102:6803/3581614210] boot 2026-03-10T13:23:06.346 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:06 vm02 ceph-mon[52534]: osdmap e8: 1 total, 1 up, 1 in 2026-03-10T13:23:06.346 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:06 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:23:06.908 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:23:07.378 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":9,"num_osds":1,"num_up_osds":1,"osd_up_since":1773148985,"num_in_osds":1,"osd_in_since":1773148973,"num_remapped_pgs":0} 2026-03-10T13:23:07.378 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph osd dump --format=json 2026-03-10T13:23:07.565 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:23:07.809 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:23:07.809 
INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":9,"fsid":"f4876d10-1c83-11f1-ae9f-3f8bea697626","created":"2026-03-10T13:22:20.679106+0000","modified":"2026-03-10T13:23:06.288729+0000","last_up_change":"2026-03-10T13:23:05.199671+0000","last_in_change":"2026-03-10T13:22:53.824106+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":1,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"796001f7-50f6-444f-a6c7-b385e4f4101d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":3581614210},{"type":"v1","addr":"192.168.123.102:6803","nonce":3581614210}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":3581614210},{"type":"v1","addr":"192.168.123.102:6805","nonce":3581614210}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":3581614210},{"type":"v1","addr":"192.168.123.102:6809","nonce":3581614210}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":3581614210},{"type":"v1","addr":"192.168.123.102:6807","nonce":3581614210}]},"public_addr":"192.168.123.102:6803/3581614210","cluster_addr":"192.168.123.102:6805/3581614210","heartbeat_back_addr":"192.168.123.102:6809/3581614210","heartbeat_front_addr":"192.168.123.102:6807/3581614210","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:23:02.487171+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.102:6801/2753132491":"2026-03-11T13:22:42.610847+0000","192.168.123.102:6800/2753132491":"2026-03-11T13:22:42.610847+0000","192.168.123.102:0/702002675":"2026-03-11T13:22:42.610847+0000","192.168.123.102:0/1129127315":"2026-03-11T13:22:42.610847+0000","192.168.123.102:0/540374878":"2026-03-11T13:22:42.610847+0000","192.168.123.102:0/2634911400":"2026-03-11T13:22:31.563457+0000","192.168.123.102:6801/1971879796":"2026-03-11T13:22:31.563457+0000","192.168.123.102:6800/1971879796":"2026-03-11T13:22:31.563457+0000","192.168.123.102:0/3405480304":"2026-03-11T13:22:31.563457+0000","192.168.123.102:0/2504759893":"2026-03-11T13:22:31.563457+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T13:23:07.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:07 vm02 ceph-mon[52534]: pgmap v10: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:07.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:07 vm02 ceph-mon[52534]: osdmap e9: 1 total, 1 up, 1 in 2026-03-10T13:23:07.844 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:07 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/3700144091' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:23:07.874 INFO:tasks.cephadm.ceph_manager.ceph:[] 2026-03-10T13:23:07.874 INFO:tasks.cephadm:Setting up client nodes... 2026-03-10T13:23:07.874 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T13:23:08.046 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:23:08.314 INFO:teuthology.orchestra.run.vm02.stdout:[client.0] 2026-03-10T13:23:08.314 INFO:teuthology.orchestra.run.vm02.stdout: key = AQA8G7BpbpCQEhAA0tEB9KGjfAyeNfXb30dcrw== 2026-03-10T13:23:08.366 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T13:23:08.366 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-10T13:23:08.367 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-10T13:23:08.401 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-10T13:23:08.401 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-10T13:23:08.401 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph mgr dump --format=json 2026-03-10T13:23:08.616 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:08 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/1711300194' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:23:08.616 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:08 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/80903852' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T13:23:08.616 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:08 vm02 ceph-mon[52534]: from='client.? 
192.168.123.102:0/80903852' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T13:23:08.634 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:23:08.899 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:23:08.968 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":12,"flags":0,"active_gid":14150,"active_name":"a","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6800","nonce":3623619894},{"type":"v1","addr":"192.168.123.102:6801","nonce":3623619894}]},"active_addr":"192.168.123.102:6801/3623619894","active_change":"2026-03-10T13:22:42.611136+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP 
server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.102:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":2723295330}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":651762368}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"
192.168.123.102:0","nonce":1260481702}]}]} 2026-03-10T13:23:08.969 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T13:23:08.969 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T13:23:08.970 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph osd dump --format=json 2026-03-10T13:23:09.155 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:23:09.398 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:23:09.398 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":9,"fsid":"f4876d10-1c83-11f1-ae9f-3f8bea697626","created":"2026-03-10T13:22:20.679106+0000","modified":"2026-03-10T13:23:06.288729+0000","last_up_change":"2026-03-10T13:23:05.199671+0000","last_in_change":"2026-03-10T13:22:53.824106+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":1,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"796001f7-50f6-444f-a6c7-b385e4f4101d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":3581614210},{"type":"v1","addr":"192.168.123.102:6803","nonce":3581614210}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":3581614210},{"type":"v1","addr":"192.168.123.102:6805","nonce":3581614210}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":3581614210},{"type":"v1","addr":"192.168.123.102:6809","nonce":3581614210}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":3581614210},{"type":"v1","addr":"192.168.123.102:6807","nonce":3581614210}]},"public_addr":"192.168.123.102:6803/3581614210","cluster_addr":"192.168.123.102:6805/3581614210","heartbeat_back_addr":"192.168.123.102:6809/3581614210","heartbeat_front_addr":"192.168.123.102:6807/3581614210","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:23:02.487171+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.102:6801/2753132491":"2026-03-11T13:22:42.610847+0000","192.168.123.102:6800/2753132491":"2026-03-11T13:22:42.610847+0000","192.168.123.102:0/702002675":"2026-03-11T13:22:42.610847+0000","192.168.123.102:0/1129127315":"2026-03-11T13:22:42.610847+0000","192.168.123.102:0/540374878":"2026-03-11T13:22:42.610847+0000","192.168.123.102:0/2634911400":"2026-03-11T13:22:31.563457+0000","192.168.123.102:6801/1971879796":"2026-03-11T13:22:31.563457+0000","192.168.123.102:6800/1971879796":"2026-03-11T13:22:31.563457+0000","192.168.123.102:0/3405480304":"2026-03-11T13:22:31.563457+0000","192.168.123.102:0/2504759893":"2026-03-11T13:22:31.563457+0000"},"range_blocklist":{},"erasure_code_profiles":{"defa
ult":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T13:23:09.466 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-10T13:23:09.466 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph osd dump --format=json 2026-03-10T13:23:09.657 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:23:09.708 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:09 vm02 ceph-mon[52534]: pgmap v12: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:09.708 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:09 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/4291052517' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T13:23:09.708 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:09 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/3087588047' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:23:09.903 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:23:09.904 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":9,"fsid":"f4876d10-1c83-11f1-ae9f-3f8bea697626","created":"2026-03-10T13:22:20.679106+0000","modified":"2026-03-10T13:23:06.288729+0000","last_up_change":"2026-03-10T13:23:05.199671+0000","last_in_change":"2026-03-10T13:22:53.824106+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":1,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"796001f7-50f6-444f-a6c7-b385e4f4101d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":3581614210},{"type":"v1","addr":"192.168.123.102:6803","nonce":3581614210}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":3581614210},{"type":"v1","addr":"192.168.123.102:6805","nonce":3581614210}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":3581614210},{"type":"v1","addr":"192.168.123.102:6809","nonce":3581614210}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":3581614210},{"type":"v1","addr":"192.168.123.102:6807","nonce":3581614210}]},"public_addr":"192.168.123.102:6803/3581614210","cluster_addr":"192.168.123.102:6805/3581614210","heartbeat_back_addr":"192.168.123.102:6809/3581614210","heartbeat_front_addr":"192.168.123.102:6807/3581614210","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T
13:23:02.487171+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.102:6801/2753132491":"2026-03-11T13:22:42.610847+0000","192.168.123.102:6800/2753132491":"2026-03-11T13:22:42.610847+0000","192.168.123.102:0/702002675":"2026-03-11T13:22:42.610847+0000","192.168.123.102:0/1129127315":"2026-03-11T13:22:42.610847+0000","192.168.123.102:0/540374878":"2026-03-11T13:22:42.610847+0000","192.168.123.102:0/2634911400":"2026-03-11T13:22:31.563457+0000","192.168.123.102:6801/1971879796":"2026-03-11T13:22:31.563457+0000","192.168.123.102:6800/1971879796":"2026-03-11T13:22:31.563457+0000","192.168.123.102:0/3405480304":"2026-03-11T13:22:31.563457+0000","192.168.123.102:0/2504759893":"2026-03-11T13:22:31.563457+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T13:23:09.949 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph tell osd.0 flush_pg_stats 2026-03-10T13:23:10.133 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:23:10.329 INFO:teuthology.orchestra.run.vm02.stdout:34359738371 2026-03-10T13:23:10.330 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph osd last-stat-seq osd.0 2026-03-10T13:23:10.506 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:23:10.733 INFO:teuthology.orchestra.run.vm02.stdout:34359738369 2026-03-10T13:23:10.787 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:10 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/1325777963' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:23:10.803 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738371 got 34359738369 for osd.0 2026-03-10T13:23:11.804 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph osd last-stat-seq osd.0 2026-03-10T13:23:11.983 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:23:12.008 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:11 vm02 ceph-mon[52534]: pgmap v13: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:12.008 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:11 vm02 ceph-mon[52534]: from='client.? 
192.168.123.102:0/4063859339' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T13:23:12.213 INFO:teuthology.orchestra.run.vm02.stdout:34359738371 2026-03-10T13:23:12.291 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738371 got 34359738371 for osd.0 2026-03-10T13:23:12.291 DEBUG:teuthology.parallel:result is None 2026-03-10T13:23:12.291 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T13:23:12.291 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph pg dump --format=json 2026-03-10T13:23:12.468 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:23:12.698 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:23:12.698 INFO:teuthology.orchestra.run.vm02.stderr:dumped all 2026-03-10T13:23:12.750 INFO:teuthology.orchestra.run.vm02.stdout:{"pg_ready":true,"pg_map":{"version":14,"stamp":"2026-03-10T13:23:12.625143+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":436520,"kb_used_data":80,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530904,"statfs":{"total":21470642176,"available":21023645696,"internally_reserved":0,"allocated":81920,"data_stored":16970,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recover
ed":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.000000"},"pg_stats":[],"pool_stats":[],"osd_stats":[{"osd":0,"up_from":8,"seq":34359738371,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":436520,"kb_used_data":80,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530904,"statfs":{"total":21470642176,"available":21023645696,"internally_reserved":0,"allocated":81920,"data_stored":16970,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[]}} 2026-03-10T13:23:12.750 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph pg dump --format=json 2026-03-10T13:23:12.774 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:12 vm02 ceph-mon[52534]: from='client.? 
192.168.123.102:0/3783398735' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T13:23:12.926 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:23:13.146 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:23:13.146 INFO:teuthology.orchestra.run.vm02.stderr:dumped all 2026-03-10T13:23:13.218 INFO:teuthology.orchestra.run.vm02.stdout:{"pg_ready":true,"pg_map":{"version":14,"stamp":"2026-03-10T13:23:12.625143+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":436520,"kb_used_data":80,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530904,"statfs":{"total":21470642176,"available":21023645696,"internally_reserved":0,"allocated":81920,"data_stored":16970,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_alloc
ated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.000000"},"pg_stats":[],"pool_stats":[],"osd_stats":[{"osd":0,"up_from":8,"seq":34359738371,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":436520,"kb_used_data":80,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530904,"statfs":{"total":21470642176,"available":21023645696,"internally_reserved":0,"allocated":81920,"data_stored":16970,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[]}} 2026-03-10T13:23:13.218 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T13:23:13.218 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-10T13:23:13.218 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T13:23:13.218 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph health --format=json 2026-03-10T13:23:13.438 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:23:13.684 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:23:13.684 INFO:teuthology.orchestra.run.vm02.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T13:23:13.728 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:13 vm02 ceph-mon[52534]: pgmap v14: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:13.728 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:13 vm02 ceph-mon[52534]: from='client.14217 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:23:13.734 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T13:23:13.734 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T13:23:13.734 INFO:teuthology.run_tasks:Running task workunit... 2026-03-10T13:23:13.738 INFO:tasks.workunit:Pulling workunits from ref 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T13:23:13.738 INFO:tasks.workunit:Making a separate scratch dir for every client... 
2026-03-10T13:23:13.738 DEBUG:teuthology.orchestra.run.vm02:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-10T13:23:13.754 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T13:23:13.755 INFO:teuthology.orchestra.run.vm02.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-10T13:23:13.755 DEBUG:teuthology.orchestra.run.vm02:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-10T13:23:13.811 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-10T13:23:13.811 DEBUG:teuthology.orchestra.run.vm02:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-10T13:23:13.867 INFO:tasks.workunit:timeout=3h 2026-03-10T13:23:13.867 INFO:tasks.workunit:cleanup=True 2026-03-10T13:23:13.867 DEBUG:teuthology.orchestra.run.vm02:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T13:23:13.924 INFO:tasks.workunit.client.0.vm02.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-10T13:23:15.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:14 vm02 ceph-mon[52534]: from='client.14219 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:23:15.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:14 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/2830801725' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T13:23:16.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:15 vm02 ceph-mon[52534]: pgmap v15: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:18.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:17 vm02 ceph-mon[52534]: pgmap v16: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:20.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:19 vm02 ceph-mon[52534]: pgmap v17: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:22.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:21 vm02 ceph-mon[52534]: pgmap v18: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:24.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:23 vm02 ceph-mon[52534]: pgmap v19: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:26.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:25 vm02 ceph-mon[52534]: pgmap v20: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:28.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:27 vm02 ceph-mon[52534]: pgmap v21: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:30.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:29 vm02 ceph-mon[52534]: pgmap v22: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:32.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:31 vm02 ceph-mon[52534]: pgmap v23: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:33.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:33 vm02 ceph-mon[52534]: pgmap v24: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:36.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:35 vm02 ceph-mon[52534]: pgmap v25: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:38.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:37 vm02 ceph-mon[52534]: pgmap v26: 0 pgs: 
; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:40.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:39 vm02 ceph-mon[52534]: pgmap v27: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:42.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:41 vm02 ceph-mon[52534]: pgmap v28: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:44.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:43 vm02 ceph-mon[52534]: pgmap v29: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:46.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:45 vm02 ceph-mon[52534]: pgmap v30: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:48.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:47 vm02 ceph-mon[52534]: pgmap v31: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:50.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:49 vm02 ceph-mon[52534]: pgmap v32: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:52.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:51 vm02 ceph-mon[52534]: pgmap v33: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:54.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:53 vm02 ceph-mon[52534]: pgmap v34: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:56.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:55 vm02 ceph-mon[52534]: pgmap v35: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:23:58.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:57 vm02 ceph-mon[52534]: pgmap v36: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:00.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:23:59 vm02 ceph-mon[52534]: pgmap v37: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr:Note: switching to '75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b'. 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr: 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr:state without impacting any branches by switching back to a branch. 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr: 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr:do so (now or later) by using -c with the switch command. 
Example: 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr: 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr: git switch -c <new-branch-name> 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr: 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr:Or undo this operation with: 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr: 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr: git switch - 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr: 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr: 2026-03-10T13:24:00.691 INFO:tasks.workunit.client.0.vm02.stderr:HEAD is now at 75a68fd8ca3 qa/suites/orch/cephadm/osds: drop nvme_loop task 2026-03-10T13:24:00.696 DEBUG:teuthology.orchestra.run.vm02:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-10T13:24:00.755 INFO:tasks.workunit.client.0.vm02.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-10T13:24:00.758 INFO:tasks.workunit.client.0.vm02.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-10T13:24:00.758 INFO:tasks.workunit.client.0.vm02.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-10T13:24:00.803 INFO:tasks.workunit.client.0.vm02.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-10T13:24:00.842 INFO:tasks.workunit.client.0.vm02.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-10T13:24:00.876 INFO:tasks.workunit.client.0.vm02.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-10T13:24:00.877 INFO:tasks.workunit.client.0.vm02.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-10T13:24:00.877 INFO:tasks.workunit.client.0.vm02.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-10T13:24:00.909 INFO:tasks.workunit.client.0.vm02.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-10T13:24:00.913 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T13:24:00.913 DEBUG:teuthology.orchestra.run.vm02:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-10T13:24:00.973 INFO:tasks.workunit:Running workunits matching cephadm/test_cephadm_timeout.py on client.0... 2026-03-10T13:24:00.974 INFO:tasks.workunit:Running workunit cephadm/test_cephadm_timeout.py... 
2026-03-10T13:24:00.974 DEBUG:teuthology.orchestra.run.vm02:workunit test cephadm/test_cephadm_timeout.py> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm_timeout.py 2026-03-10T13:24:01.233 INFO:tasks.workunit.client.0.vm02.stderr:Inferring fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:24:01.283 INFO:tasks.workunit.client.0.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:24:01.385 INFO:tasks.workunit.client.0.vm02.stderr:Using ceph image with id '654f31e6858e' and tag 'e911bdebe5c8faa3800735d1568fcdca65db60df' created on 2026-02-25 18:57:17 +0000 UTC 2026-03-10T13:24:01.385 INFO:tasks.workunit.client.0.vm02.stderr:quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T13:24:01.864 INFO:tasks.workunit.client.0.vm02.stderr:Inferring fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:24:01.934 INFO:tasks.workunit.client.0.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:24:01.987 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:01 vm02 ceph-mon[52534]: pgmap v38: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:01.987 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:01 vm02 ceph-mon[52534]: from='client.? 
192.168.123.102:0/3864321797' entity='client.admin' 2026-03-10T13:24:01.987 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:01 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:24:01.987 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:01 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:24:01.987 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:01 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:24:01.988 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:01 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:24:02.028 INFO:tasks.workunit.client.0.vm02.stderr:Using ceph image with id '654f31e6858e' and tag 'e911bdebe5c8faa3800735d1568fcdca65db60df' created on 2026-02-25 18:57:17 +0000 UTC 2026-03-10T13:24:02.028 INFO:tasks.workunit.client.0.vm02.stderr:quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T13:24:02.268 INFO:tasks.workunit.client.0.vm02.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS 2026-03-10T13:24:02.268 INFO:tasks.workunit.client.0.vm02.stdout:vm02 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 59s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-10T13:24:02.268 INFO:tasks.workunit.client.0.vm02.stdout:vm02 /dev/vdb hdd DWNBRSTVMM02001 20.0G Yes 59s ago 2026-03-10T13:24:02.268 INFO:tasks.workunit.client.0.vm02.stdout:vm02 /dev/vdc hdd DWNBRSTVMM02002 20.0G Yes 59s ago 2026-03-10T13:24:02.268 INFO:tasks.workunit.client.0.vm02.stdout:vm02 /dev/vdd hdd DWNBRSTVMM02003 20.0G Yes 59s ago 2026-03-10T13:24:02.268 INFO:tasks.workunit.client.0.vm02.stdout:vm02 /dev/vde hdd DWNBRSTVMM02004 20.0G No 59s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T13:24:02.989 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:02 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:24:02.990 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:02 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:24:02.990 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:02 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:24:04.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:03 vm02 ceph-mon[52534]: from='client.14225 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:24:04.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:03 vm02 ceph-mon[52534]: pgmap v39: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:06.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:05 vm02 ceph-mon[52534]: pgmap v40: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:08.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:07 vm02 ceph-mon[52534]: pgmap v41: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:10.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:09 vm02 ceph-mon[52534]: pgmap v42: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:12.344 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:11 vm02 ceph-mon[52534]: pgmap v43: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:14.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:14 vm02 ceph-mon[52534]: pgmap v44: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:16.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:16 vm02 ceph-mon[52534]: pgmap v45: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:18.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:18 vm02 ceph-mon[52534]: pgmap v46: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:20.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:20 vm02 ceph-mon[52534]: pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:22.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:22 vm02 ceph-mon[52534]: pgmap v48: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:24.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:24 vm02 ceph-mon[52534]: pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:26.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:26 vm02 ceph-mon[52534]: pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:28.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:28 vm02 ceph-mon[52534]: pgmap v51: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:30.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:30 vm02 ceph-mon[52534]: pgmap v52: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:32.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:32 vm02 ceph-mon[52534]: pgmap v53: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:34.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:34 vm02 ceph-mon[52534]: pgmap v54: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:35.844 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:35 vm02 ceph-mon[52534]: pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:38.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:37 vm02 ceph-mon[52534]: pgmap v56: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:40.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:39 vm02 ceph-mon[52534]: pgmap v57: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:42.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:41 vm02 ceph-mon[52534]: pgmap v58: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:44.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:43 vm02 ceph-mon[52534]: pgmap v59: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:46.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:45 vm02 ceph-mon[52534]: pgmap v60: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:48.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:47 vm02 ceph-mon[52534]: pgmap v61: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:50.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:49 vm02 ceph-mon[52534]: pgmap v62: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:52.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:51 vm02 ceph-mon[52534]: pgmap v63: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:54.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 
13:24:53 vm02 ceph-mon[52534]: pgmap v64: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:56.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:55 vm02 ceph-mon[52534]: pgmap v65: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:24:58.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:57 vm02 ceph-mon[52534]: pgmap v66: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:00.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:24:59 vm02 ceph-mon[52534]: pgmap v67: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:02.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:01 vm02 ceph-mon[52534]: pgmap v68: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:04.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:03 vm02 ceph-mon[52534]: pgmap v69: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:06.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:05 vm02 ceph-mon[52534]: pgmap v70: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:08.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:08 vm02 ceph-mon[52534]: pgmap v71: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:10.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:10 vm02 ceph-mon[52534]: pgmap v72: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:12.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:12 vm02 ceph-mon[52534]: pgmap v73: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:14.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:14 vm02 ceph-mon[52534]: pgmap v74: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:16.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:16 vm02 ceph-mon[52534]: pgmap v75: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:18.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:18 vm02 ceph-mon[52534]: pgmap v76: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:20.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:20 vm02 ceph-mon[52534]: pgmap v77: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:22.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:22 vm02 ceph-mon[52534]: pgmap v78: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:24.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:24 vm02 ceph-mon[52534]: pgmap v79: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:26.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:26 vm02 ceph-mon[52534]: pgmap v80: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:28.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:28 vm02 ceph-mon[52534]: pgmap v81: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:30.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:30 vm02 ceph-mon[52534]: pgmap v82: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:32.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:32 vm02 ceph-mon[52534]: pgmap v83: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:34.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:34 vm02 ceph-mon[52534]: pgmap v84: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:36.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:36 vm02 ceph-mon[52534]: pgmap v85: 0 
pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:38.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:38 vm02 ceph-mon[52534]: pgmap v86: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:40.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:40 vm02 ceph-mon[52534]: pgmap v87: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:42.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:42 vm02 ceph-mon[52534]: pgmap v88: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:43.594 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:43 vm02 ceph-mon[52534]: pgmap v89: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:46.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:45 vm02 ceph-mon[52534]: pgmap v90: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:48.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:47 vm02 ceph-mon[52534]: pgmap v91: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:50.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:49 vm02 ceph-mon[52534]: pgmap v92: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:52.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:51 vm02 ceph-mon[52534]: pgmap v93: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:54.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:53 vm02 ceph-mon[52534]: pgmap v94: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:56.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:55 vm02 ceph-mon[52534]: pgmap v95: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:25:58.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:57 vm02 ceph-mon[52534]: pgmap v96: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:00.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:25:59 vm02 ceph-mon[52534]: pgmap v97: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:02.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:01 vm02 ceph-mon[52534]: pgmap v98: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:03.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:02 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:26:03.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:02 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:26:03.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:02 vm02 ceph-mon[52534]: from='mgr.14150 192.168.123.102:0/4055734390' entity='mgr.a' 2026-03-10T13:26:04.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:03 vm02 ceph-mon[52534]: pgmap v99: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:04.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:03 vm02 ceph-mon[52534]: Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED) 2026-03-10T13:26:06.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:05 vm02 ceph-mon[52534]: pgmap v100: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:08.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:07 vm02 ceph-mon[52534]: pgmap v101: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:10.094 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:09 vm02 ceph-mon[52534]: pgmap v102: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:12.094 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:11 vm02 ceph-mon[52534]: pgmap v103: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:14.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:13 vm02 ceph-mon[52534]: pgmap v104: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:16.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:15 vm02 ceph-mon[52534]: pgmap v105: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:18.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:17 vm02 ceph-mon[52534]: pgmap v106: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:20.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:19 vm02 ceph-mon[52534]: pgmap v107: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:22.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:21 vm02 ceph-mon[52534]: pgmap v108: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:24.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:23 vm02 ceph-mon[52534]: pgmap v109: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:26.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:25 vm02 ceph-mon[52534]: pgmap v110: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:28.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:27 vm02 ceph-mon[52534]: pgmap v111: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:30.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:29 vm02 ceph-mon[52534]: pgmap v112: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:32.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:31 vm02 ceph-mon[52534]: pgmap v113: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:32.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:32 vm02 ceph-mon[52534]: from='client.? 192.168.123.102:0/1598671896' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:Looking for cluster fsid... 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:Found fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:Setting cephadm command timeout to 120... 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:Taking hold of cephadm lock for 300 seconds... 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:Triggering cephadm device refresh... 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:Sleeping 150 seconds to allow for timeout to occur... 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:Checking ceph health detail... 
2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:"cephadm shell -- ceph health detail" stdout: 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:HEALTH_WARN failed to probe daemons or devices 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:[WRN] CEPHADM_REFRESH_FAILED: failed to probe daemons or devices 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout: Command "cephadm ceph-volume -- inventory" timed out on host vm02 (default 120 second timeout) 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout: 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:"cephadm shell -- ceph health detail" stderr: 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:Inferring fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:Using ceph image with id '654f31e6858e' and tag 'e911bdebe5c8faa3800735d1568fcdca65db60df' created on 2026-02-25 18:57:17 +0000 UTC 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout: 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:Checking for correct health warning in health detail... 2026-03-10T13:26:33.025 INFO:tasks.workunit.client.0.vm02.stdout:Health warnings found successfully. Exiting. 2026-03-10T13:26:33.029 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-10T13:26:33.029 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp 2026-03-10T13:26:33.057 INFO:tasks.workunit:Stopping ['cephadm/test_cephadm_timeout.py'] on client.0... 
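The workunit output above traces a simple flow: find the cluster fsid, lower the cephadm command timeout to 120 seconds, hold the host's cephadm lock for longer than that, trigger a device refresh so "cephadm ceph-volume -- inventory" blocks behind the lock, and then confirm that CEPHADM_REFRESH_FAILED shows up in ceph health detail. Below is a minimal sketch of that flow, for orientation only; it is not the real qa/workunits/cephadm/test_cephadm_timeout.py, and the lock path and the mgr/cephadm/default_cephadm_command_timeout option name are assumptions rather than values taken from this log.

import fcntl
import subprocess
import time

FSID = "f4876d10-1c83-11f1-ae9f-3f8bea697626"    # from the log above
LOCK_PATH = f"/run/cephadm/{FSID}.lock"          # assumed lock location, not shown in the log
TIMEOUT = 120                                    # matches "default 120 second timeout"

def ceph(*args):
    # Run a ceph command through "cephadm shell", as the workunit output does.
    cmd = ["sudo", "cephadm", "shell", "--", "ceph", *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Lower the cephadm command timeout (option name assumed, not shown in the log).
ceph("config", "set", "mgr", "mgr/cephadm/default_cephadm_command_timeout", str(TIMEOUT))

with open(LOCK_PATH, "w") as lock:
    fcntl.flock(lock, fcntl.LOCK_EX)             # hold the per-host cephadm lock
    ceph("orch", "device", "ls", "--refresh")    # kicks off "cephadm ceph-volume -- inventory"
    time.sleep(TIMEOUT + 30)                     # wait long enough for the 120 s timeout to fire
    detail = ceph("health", "detail")
    assert "CEPHADM_REFRESH_FAILED" in detail, detail
    fcntl.flock(lock, fcntl.LOCK_UN)

Because the health detail output above does report the warning, the workunit exits cleanly and teuthology moves on to tearing the cluster down.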
2026-03-10T13:26:33.057 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 2026-03-10T13:26:33.503 DEBUG:teuthology.parallel:result is None 2026-03-10T13:26:33.503 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0 2026-03-10T13:26:33.543 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0 2026-03-10T13:26:33.543 DEBUG:teuthology.orchestra.run.vm02:> rmdir -- /home/ubuntu/cephtest/mnt.0 2026-03-10T13:26:33.604 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0 2026-03-10T13:26:33.604 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-10T13:26:33.607 INFO:tasks.cephadm:Teardown begin 2026-03-10T13:26:33.607 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:26:33.683 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-10T13:26:33.683 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 -- ceph mgr module disable cephadm 2026-03-10T13:26:33.905 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/mon.a/config 2026-03-10T13:26:33.923 INFO:teuthology.orchestra.run.vm02.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory 2026-03-10T13:26:33.949 DEBUG:teuthology.orchestra.run:got remote process result: 125 2026-03-10T13:26:33.949 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-10T13:26:33.949 DEBUG:teuthology.orchestra.run.vm02:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T13:26:33.968 INFO:tasks.cephadm:Stopping all daemons... 2026-03-10T13:26:33.968 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-10T13:26:33.968 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mon.a 2026-03-10T13:26:34.033 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:33 vm02 ceph-mon[52534]: pgmap v114: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:26:34.245 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mon.a.service' 2026-03-10T13:26:34.310 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:34 vm02 systemd[1]: Stopping Ceph mon.a for f4876d10-1c83-11f1-ae9f-3f8bea697626... 
2026-03-10T13:26:34.310 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:34 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mon-a[52530]: 2026-03-10T13:26:34.114+0000 7f70b22cf640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T13:26:34.310 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:34 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mon-a[52530]: 2026-03-10T13:26:34.114+0000 7f70b22cf640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T13:26:34.310 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:34 vm02 podman[69213]: 2026-03-10 13:26:34.16701506 +0000 UTC m=+0.066253388 container died 6dbf608920d670372c597edba97d9884f9555972b6725f88da7aca2c1ed6d03e (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mon-a, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2) 2026-03-10T13:26:34.310 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:34 vm02 podman[69213]: 2026-03-10 13:26:34.183901197 +0000 UTC m=+0.083139525 container remove 6dbf608920d670372c597edba97d9884f9555972b6725f88da7aca2c1ed6d03e (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mon-a, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2) 2026-03-10T13:26:34.310 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:34 vm02 bash[69213]: ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mon-a 2026-03-10T13:26:34.310 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:34 vm02 systemd[1]: ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mon.a.service: Deactivated successfully. 2026-03-10T13:26:34.310 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:34 vm02 systemd[1]: Stopped Ceph mon.a for f4876d10-1c83-11f1-ae9f-3f8bea697626. 2026-03-10T13:26:34.310 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 13:26:34 vm02 systemd[1]: ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mon.a.service: Consumed 1.791s CPU time. 
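Teardown stops each cephadm-managed daemon through its systemd unit, named ceph-<fsid>@<daemon>.service, and then kills the journalctl follower that teuthology had attached to that unit, which is why every "Stopping ..." entry above is paired with a pkill of "journalctl -f". A rough sketch of that loop is below; run_on_host is a hypothetical stand-in for teuthology's remote execution, and the daemon list is simply the roles this job placed on vm02.

import subprocess

FSID = "f4876d10-1c83-11f1-ae9f-3f8bea697626"
DAEMONS = ["mon.a", "mgr.a", "osd.0"]            # daemons this job placed on vm02

def run_on_host(cmd):
    # Stand-in for teuthology's remote execution; runs locally here for illustration only.
    subprocess.run(cmd, check=False)

for daemon in DAEMONS:
    unit = f"ceph-{FSID}@{daemon}"
    run_on_host(["sudo", "systemctl", "stop", unit])
    # teuthology also terminates the journalctl follower it attached to this unit
    run_on_host(["sudo", "pkill", "-f", f"journalctl -f -n 0 -u {unit}.service"])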
2026-03-10T13:26:34.322 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T13:26:34.322 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-10T13:26:34.322 INFO:tasks.cephadm.mgr.a:Stopping mgr.a... 2026-03-10T13:26:34.322 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mgr.a 2026-03-10T13:26:34.574 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:26:34 vm02 systemd[1]: Stopping Ceph mgr.a for f4876d10-1c83-11f1-ae9f-3f8bea697626... 2026-03-10T13:26:34.574 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:26:34 vm02 podman[69316]: 2026-03-10 13:26:34.495756082 +0000 UTC m=+0.046782104 container died 73f7e11261b700e6e35dd912f1b32e0d9523255d92c9c32d8ffcc0583a2660fd (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid) 2026-03-10T13:26:34.574 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:26:34 vm02 podman[69316]: 2026-03-10 13:26:34.518025333 +0000 UTC m=+0.069051355 container remove 73f7e11261b700e6e35dd912f1b32e0d9523255d92c9c32d8ffcc0583a2660fd (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T13:26:34.574 INFO:journalctl@ceph.mgr.a.vm02.stdout:Mar 10 13:26:34 vm02 bash[69316]: ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-mgr-a 2026-03-10T13:26:34.582 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@mgr.a.service' 2026-03-10T13:26:34.613 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T13:26:34.613 INFO:tasks.cephadm.mgr.a:Stopped mgr.a 2026-03-10T13:26:34.613 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-10T13:26:34.613 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@osd.0 2026-03-10T13:26:34.844 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 13:26:34 vm02 systemd[1]: Stopping Ceph osd.0 for f4876d10-1c83-11f1-ae9f-3f8bea697626... 
2026-03-10T13:26:34.844 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 13:26:34 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-osd-0[61705]: 2026-03-10T13:26:34.753+0000 7fc384307640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T13:26:34.844 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 13:26:34 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-osd-0[61705]: 2026-03-10T13:26:34.753+0000 7fc384307640 -1 osd.0 9 *** Got signal Terminated *** 2026-03-10T13:26:34.844 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 13:26:34 vm02 ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-osd-0[61705]: 2026-03-10T13:26:34.753+0000 7fc384307640 -1 osd.0 9 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T13:26:40.051 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 13:26:39 vm02 podman[69419]: 2026-03-10 13:26:39.790578598 +0000 UTC m=+5.051449141 container died 074da96501a9ce0ddd4886518cbb615cd67e81fefcaea4c70d0e18c56d010e16 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-osd-0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_REF=squid, OSD_FLAVOR=default) 2026-03-10T13:26:40.051 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 13:26:39 vm02 podman[69419]: 2026-03-10 13:26:39.834674023 +0000 UTC m=+5.095544566 container remove 074da96501a9ce0ddd4886518cbb615cd67e81fefcaea4c70d0e18c56d010e16 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid) 2026-03-10T13:26:40.051 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 13:26:39 vm02 bash[69419]: ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-osd-0 2026-03-10T13:26:40.051 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 13:26:39 vm02 podman[69489]: 2026-03-10 13:26:39.962643986 +0000 UTC m=+0.019966222 container create c61e847255f4c1d9145a3a2813351b9e068d441a946798b36ef14900e8310be2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-osd-0-deactivate, ceph=True, OSD_FLAVOR=default, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T13:26:40.051 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 13:26:40 vm02 podman[69489]: 2026-03-10 13:26:40.012735003 +0000 UTC m=+0.070057249 container init c61e847255f4c1d9145a3a2813351b9e068d441a946798b36ef14900e8310be2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-osd-0-deactivate, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid) 2026-03-10T13:26:40.051 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 13:26:40 vm02 podman[69489]: 2026-03-10 13:26:40.015369032 +0000 UTC m=+0.072691268 container start c61e847255f4c1d9145a3a2813351b9e068d441a946798b36ef14900e8310be2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-osd-0-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T13:26:40.051 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 13:26:40 vm02 podman[69489]: 2026-03-10 13:26:40.0206837 +0000 UTC m=+0.078005936 container attach c61e847255f4c1d9145a3a2813351b9e068d441a946798b36ef14900e8310be2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626-osd-0-deactivate, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , 
FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3) 2026-03-10T13:26:40.165 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f4876d10-1c83-11f1-ae9f-3f8bea697626@osd.0.service' 2026-03-10T13:26:40.198 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T13:26:40.198 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-10T13:26:40.198 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 --force --keep-logs 2026-03-10T13:26:40.365 INFO:teuthology.orchestra.run.vm02.stdout:Deleting cluster with fsid: f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:26:41.317 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:26:41.347 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-10T13:26:41.347 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1045/remote/vm02/crash 2026-03-10T13:26:41.347 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/crash -- . 2026-03-10T13:26:41.414 INFO:teuthology.orchestra.run.vm02.stderr:tar: /var/lib/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/crash: Cannot open: No such file or directory 2026-03-10T13:26:41.414 INFO:teuthology.orchestra.run.vm02.stderr:tar: Error is not recoverable: exiting now 2026-03-10T13:26:41.415 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-10T13:26:41.415 DEBUG:teuthology.orchestra.run.vm02:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v CEPHADM_REFRESH_FAILED | head -n 1 2026-03-10T13:26:41.487 INFO:tasks.cephadm:Compressing logs... 
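The "Checking cluster log for badness" step above is the job's log-only-match and log-ignorelist settings turned into a grep pipeline over the cluster log: keep [ERR]/[WRN]/[SEC] lines, keep only those matching CEPHADM_, drop anything on the ignore list (CEPHADM_REFRESH_FAILED is expected here because the test deliberately provoked it), and fail if anything is left. A rough Python equivalent of that pipeline, for illustration only:

import re

CLUSTER_LOG = "/var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph.log"
ONLY_MATCH = [r"CEPHADM_"]                                       # log-only-match from the job config
IGNORELIST = [r"\(MDS_ALL_DOWN\)", r"\(MDS_UP_LESS_THAN_MAX\)", r"CEPHADM_REFRESH_FAILED"]

def first_bad_line(path):
    severity = re.compile(r"\[ERR\]|\[WRN\]|\[SEC\]")
    with open(path) as f:
        for line in f:
            if not severity.search(line):
                continue
            if not any(re.search(p, line) for p in ONLY_MATCH):
                continue
            if any(re.search(p, line) for p in IGNORELIST):
                continue
            return line                                          # equivalent of "head -n 1"
    return None

bad = first_bad_line(CLUSTER_LOG)
if bad:
    raise SystemExit("unexpected cluster log entry: " + bad)

In this run the pipeline produced no output, so the task proceeds directly to compressing and archiving the logs.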
2026-03-10T13:26:41.487 DEBUG:teuthology.orchestra.run.vm02:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T13:26:41.551 INFO:teuthology.orchestra.run.vm02.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T13:26:41.552 INFO:teuthology.orchestra.run.vm02.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T13:26:41.552 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph-mon.a.log 2026-03-10T13:26:41.552 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph.log 2026-03-10T13:26:41.553 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph-mon.a.log: 89.9% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T13:26:41.554 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph-mgr.a.log 2026-03-10T13:26:41.554 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph.log: 86.7% -- replaced with /var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph.log.gz 2026-03-10T13:26:41.560 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph.audit.log 2026-03-10T13:26:41.570 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph-mgr.a.log: gzip -5 --verbose -- /var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph.cephadm.log 2026-03-10T13:26:41.571 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph.audit.log: 88.0% -- replaced with /var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph.audit.log.gz 2026-03-10T13:26:41.571 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph-volume.log 2026-03-10T13:26:41.571 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph.cephadm.log: 74.1% -- replaced with /var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph.cephadm.log.gz 2026-03-10T13:26:41.579 INFO:teuthology.orchestra.run.vm02.stderr: 90.7% -- replaced with /var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph-mgr.a.log.gz 2026-03-10T13:26:41.579 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph-osd.0.log 2026-03-10T13:26:41.581 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph-volume.log: 94.8% -- replaced with /var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph-volume.log.gz 2026-03-10T13:26:41.589 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph-osd.0.log: 94.2% -- replaced with /var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph-osd.0.log.gz 2026-03-10T13:26:41.595 INFO:teuthology.orchestra.run.vm02.stderr: 92.2% -- replaced with /var/log/ceph/f4876d10-1c83-11f1-ae9f-3f8bea697626/ceph-mon.a.log.gz 2026-03-10T13:26:41.597 INFO:teuthology.orchestra.run.vm02.stderr: 2026-03-10T13:26:41.597 INFO:teuthology.orchestra.run.vm02.stderr:real 0m0.055s 2026-03-10T13:26:41.597 INFO:teuthology.orchestra.run.vm02.stderr:user 0m0.074s 2026-03-10T13:26:41.597 INFO:teuthology.orchestra.run.vm02.stderr:sys 
0m0.018s 2026-03-10T13:26:41.597 INFO:tasks.cephadm:Archiving logs... 2026-03-10T13:26:41.597 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1045/remote/vm02/log 2026-03-10T13:26:41.597 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T13:26:41.667 INFO:tasks.cephadm:Removing cluster... 2026-03-10T13:26:41.667 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f4876d10-1c83-11f1-ae9f-3f8bea697626 --force 2026-03-10T13:26:41.835 INFO:teuthology.orchestra.run.vm02.stdout:Deleting cluster with fsid: f4876d10-1c83-11f1-ae9f-3f8bea697626 2026-03-10T13:26:42.063 INFO:tasks.cephadm:Removing cephadm ... 2026-03-10T13:26:42.063 DEBUG:teuthology.orchestra.run.vm02:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T13:26:42.079 INFO:tasks.cephadm:Teardown complete 2026-03-10T13:26:42.079 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-10T13:26:42.083 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 2026-03-10T13:26:42.083 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-10T13:26:42.152 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 2026-03-10T13:26:42.152 DEBUG:teuthology.orchestra.run.vm02:> 2026-03-10T13:26:42.152 DEBUG:teuthology.orchestra.run.vm02:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-10T13:26:42.152 DEBUG:teuthology.orchestra.run.vm02:> sudo yum -y remove $d || true 2026-03-10T13:26:42.152 DEBUG:teuthology.orchestra.run.vm02:> done 2026-03-10T13:26:42.371 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 
2026-03-10T13:26:42.371 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:42.371 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size 2026-03-10T13:26:42.371 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:42.371 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T13:26:42.371 INFO:teuthology.orchestra.run.vm02.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M 2026-03-10T13:26:42.371 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies: 2026-03-10T13:26:42.371 INFO:teuthology.orchestra.run.vm02.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-10T13:26:42.371 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:42.371 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T13:26:42.372 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:42.372 INFO:teuthology.orchestra.run.vm02.stdout:Remove 2 Packages 2026-03-10T13:26:42.372 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:42.372 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 39 M 2026-03-10T13:26:42.372 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T13:26:42.374 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T13:26:42.374 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T13:26:42.387 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 2026-03-10T13:26:42.387 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T13:26:42.421 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T13:26:42.445 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T13:26:42.445 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:26:42.445 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-10T13:26:42.445 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-10T13:26:42.445 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 
2026-03-10T13:26:42.445 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:42.448 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T13:26:42.459 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T13:26:42.474 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T13:26:42.550 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T13:26:42.550 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T13:26:42.601 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T13:26:42.601 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:42.601 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T13:26:42.601 INFO:teuthology.orchestra.run.vm02.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch 2026-03-10T13:26:42.601 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:42.601 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:42.807 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies: 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout:Remove 4 Packages 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 212 M 2026-03-10T13:26:42.808 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T13:26:42.811 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T13:26:42.812 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T13:26:42.835 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 
2026-03-10T13:26:42.835 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T13:26:42.899 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T13:26:42.905 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4 2026-03-10T13:26:42.907 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4 2026-03-10T13:26:42.911 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4 2026-03-10T13:26:42.927 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-10T13:26:42.991 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-10T13:26:42.992 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4 2026-03-10T13:26:42.992 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4 2026-03-10T13:26:42.992 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4 2026-03-10T13:26:43.044 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4 2026-03-10T13:26:43.044 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:43.044 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T13:26:43.044 INFO:teuthology.orchestra.run.vm02.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64 2026-03-10T13:26:43.044 INFO:teuthology.orchestra.run.vm02.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T13:26:43.044 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:43.044 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:43.272 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 
2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies: 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout: zip x86_64 3.0-35.el9 @baseos 724 k 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout:Remove 8 Packages 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 28 M 2026-03-10T13:26:43.273 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T13:26:43.276 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T13:26:43.276 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T13:26:43.297 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 2026-03-10T13:26:43.298 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T13:26:43.350 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T13:26:43.356 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8 2026-03-10T13:26:43.359 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8 2026-03-10T13:26:43.361 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8 2026-03-10T13:26:43.364 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8 2026-03-10T13:26:43.366 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8 2026-03-10T13:26:43.368 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8 2026-03-10T13:26:43.390 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-10T13:26:43.390 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-10T13:26:43.390 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-10T13:26:43.390 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target". 2026-03-10T13:26:43.390 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target". 2026-03-10T13:26:43.390 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:43.391 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-10T13:26:43.399 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-10T13:26:43.421 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-10T13:26:43.421 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:26:43.421 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-10T13:26:43.421 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target". 2026-03-10T13:26:43.421 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target". 2026-03-10T13:26:43.421 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:43.423 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-10T13:26:43.508 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-10T13:26:43.508 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8 2026-03-10T13:26:43.508 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8 2026-03-10T13:26:43.508 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8 2026-03-10T13:26:43.508 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8 2026-03-10T13:26:43.508 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8 2026-03-10T13:26:43.508 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8 2026-03-10T13:26:43.508 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8 2026-03-10T13:26:43.564 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8 2026-03-10T13:26:43.564 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:43.564 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T13:26:43.564 INFO:teuthology.orchestra.run.vm02.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:43.564 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:43.564 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:43.564 INFO:teuthology.orchestra.run.vm02.stdout: lua-5.4.4-4.el9.x86_64 2026-03-10T13:26:43.564 INFO:teuthology.orchestra.run.vm02.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-10T13:26:43.564 INFO:teuthology.orchestra.run.vm02.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-10T13:26:43.564 INFO:teuthology.orchestra.run.vm02.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T13:26:43.564 
INFO:teuthology.orchestra.run.vm02.stdout: zip-3.0-35.el9.x86_64 2026-03-10T13:26:43.564 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:43.564 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:43.795 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout:=========================================================================================== 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout:=========================================================================================== 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout:Removing dependent packages: 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies: 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k 2026-03-10T13:26:43.801 INFO:teuthology.orchestra.run.vm02.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 
@appstream 39 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k 2026-03-10T13:26:43.802 
INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-10T13:26:43.802 INFO:teuthology.orchestra.run.vm02.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyOpenSSL noarch 
21.0.0-1.el9 @epel 389 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout:=========================================================================================== 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout:Remove 102 Packages 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 613 M 2026-03-10T13:26:43.803 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T13:26:43.829 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T13:26:43.829 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T13:26:43.934 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 
2026-03-10T13:26:43.934 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T13:26:44.107 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T13:26:44.107 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T13:26:44.116 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T13:26:44.138 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T13:26:44.139 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:26:44.139 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T13:26:44.139 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-10T13:26:44.139 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 2026-03-10T13:26:44.139 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:44.139 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T13:26:44.155 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T13:26:44.179 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102 2026-03-10T13:26:44.180 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T13:26:44.237 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T13:26:44.246 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102 2026-03-10T13:26:44.253 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102 2026-03-10T13:26:44.253 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T13:26:44.265 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T13:26:44.272 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102 2026-03-10T13:26:44.281 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102 2026-03-10T13:26:44.289 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102 2026-03-10T13:26:44.293 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102 2026-03-10T13:26:44.314 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T13:26:44.315 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:26:44.315 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-10T13:26:44.315 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-10T13:26:44.315 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 
2026-03-10T13:26:44.315 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:44.319 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T13:26:44.330 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T13:26:44.347 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T13:26:44.347 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:26:44.348 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-03-10T13:26:44.348 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:44.356 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T13:26:44.366 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T13:26:44.371 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102 2026-03-10T13:26:44.377 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102 2026-03-10T13:26:44.383 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102 2026-03-10T13:26:44.397 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102 2026-03-10T13:26:44.410 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102 2026-03-10T13:26:44.421 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102 2026-03-10T13:26:44.438 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102 2026-03-10T13:26:44.444 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102 2026-03-10T13:26:44.474 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102 2026-03-10T13:26:44.481 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102 2026-03-10T13:26:44.484 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102 2026-03-10T13:26:44.493 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102 2026-03-10T13:26:44.504 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102 2026-03-10T13:26:44.504 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-10T13:26:44.513 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-10T13:26:44.615 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102 2026-03-10T13:26:44.630 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102 2026-03-10T13:26:44.644 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T13:26:44.644 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 
2026-03-10T13:26:44.644 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:44.645 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T13:26:44.671 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T13:26:44.686 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102 2026-03-10T13:26:44.692 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102 2026-03-10T13:26:44.694 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102 2026-03-10T13:26:44.697 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102 2026-03-10T13:26:44.717 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-10T13:26:44.717 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:26:44.717 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-10T13:26:44.717 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-03-10T13:26:44.717 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 2026-03-10T13:26:44.717 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:44.719 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-10T13:26:44.730 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-10T13:26:44.733 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102 2026-03-10T13:26:44.736 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102 2026-03-10T13:26:44.738 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102 2026-03-10T13:26:44.741 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102 2026-03-10T13:26:44.744 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102 2026-03-10T13:26:44.748 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102 2026-03-10T13:26:44.752 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102 2026-03-10T13:26:44.799 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102 2026-03-10T13:26:44.810 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102 2026-03-10T13:26:44.812 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102 2026-03-10T13:26:44.817 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102 2026-03-10T13:26:44.820 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102 2026-03-10T13:26:44.823 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102 2026-03-10T13:26:44.826 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102 
2026-03-10T13:26:44.847 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-10T13:26:44.847 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:26:44.847 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-10T13:26:44.847 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:44.847 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-10T13:26:44.855 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-10T13:26:44.856 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102 2026-03-10T13:26:44.858 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102 2026-03-10T13:26:44.861 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102 2026-03-10T13:26:44.863 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102 2026-03-10T13:26:44.865 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102 2026-03-10T13:26:44.868 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102 2026-03-10T13:26:44.871 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102 2026-03-10T13:26:44.873 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102 2026-03-10T13:26:44.880 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102 2026-03-10T13:26:44.885 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102 2026-03-10T13:26:44.887 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102 2026-03-10T13:26:44.889 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102 2026-03-10T13:26:44.891 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102 2026-03-10T13:26:44.896 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102 2026-03-10T13:26:44.900 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102 2026-03-10T13:26:44.905 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102 2026-03-10T13:26:44.909 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102 2026-03-10T13:26:44.914 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102 2026-03-10T13:26:44.917 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102 2026-03-10T13:26:44.920 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102 2026-03-10T13:26:44.923 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102 2026-03-10T13:26:44.928 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102 2026-03-10T13:26:44.931 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102 
2026-03-10T13:26:44.934 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102 2026-03-10T13:26:44.943 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102 2026-03-10T13:26:44.948 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102 2026-03-10T13:26:44.951 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102 2026-03-10T13:26:44.953 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102 2026-03-10T13:26:44.955 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102 2026-03-10T13:26:44.960 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102 2026-03-10T13:26:44.963 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102 2026-03-10T13:26:44.983 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T13:26:44.983 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 2026-03-10T13:26:44.983 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:44.990 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T13:26:45.015 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T13:26:45.015 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-10T13:26:45.026 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-10T13:26:45.030 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102 2026-03-10T13:26:45.033 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102 2026-03-10T13:26:45.034 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102 2026-03-10T13:26:45.035 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-10T13:26:50.521 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-10T13:26:50.521 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /sys 2026-03-10T13:26:50.521 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /proc 2026-03-10T13:26:50.521 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /mnt 2026-03-10T13:26:50.521 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /var/tmp 2026-03-10T13:26:50.521 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /home 2026-03-10T13:26:50.521 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /root 2026-03-10T13:26:50.521 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /tmp 2026-03-10T13:26:50.521 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:50.529 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102 2026-03-10T13:26:50.545 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T13:26:50.545 
INFO:teuthology.orchestra.run.vm02.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T13:26:50.552 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T13:26:50.555 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102 2026-03-10T13:26:50.557 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102 2026-03-10T13:26:50.560 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102 2026-03-10T13:26:50.562 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102 2026-03-10T13:26:50.562 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T13:26:50.575 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T13:26:50.576 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102 2026-03-10T13:26:50.578 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102 2026-03-10T13:26:50.581 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102 2026-03-10T13:26:50.583 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102 2026-03-10T13:26:50.588 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102 2026-03-10T13:26:50.597 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102 2026-03-10T13:26:50.601 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102 2026-03-10T13:26:50.601 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102 2026-03-10T13:26:50.708 
INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102 2026-03-10T13:26:50.708 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102 2026-03-10T13:26:50.710 
INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102 2026-03-10T13:26:50.710 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : 
python3-natsort-7.1.1-5.el9.noarch 69/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying 
: python3-zc-lockfile-2.0-10.el9.noarch 98/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102 2026-03-10T13:26:50.711 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102 2026-03-10T13:26:50.790 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: liboath-2.6.12-1.el9.x86_64 
2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T13:26:50.791 INFO:teuthology.orchestra.run.vm02.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: 
python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: 
python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:50.792 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:51.033 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:51.033 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:51.033 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size 2026-03-10T13:26:51.033 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:51.033 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T13:26:51.033 INFO:teuthology.orchestra.run.vm02.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k 2026-03-10T13:26:51.033 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:51.033 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T13:26:51.033 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:51.033 INFO:teuthology.orchestra.run.vm02.stdout:Remove 1 Package 2026-03-10T13:26:51.033 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:51.033 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 775 k 2026-03-10T13:26:51.034 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T13:26:51.035 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T13:26:51.035 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T13:26:51.037 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 2026-03-10T13:26:51.037 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T13:26:51.054 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T13:26:51.054 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T13:26:51.164 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T13:26:51.201 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T13:26:51.201 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:51.201 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T13:26:51.201 INFO:teuthology.orchestra.run.vm02.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T13:26:51.201 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:51.201 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 
2026-03-10T13:26:51.393 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-immutable-object-cache 2026-03-10T13:26:51.393 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:51.396 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:51.397 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T13:26:51.397 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:51.565 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-mgr 2026-03-10T13:26:51.566 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:51.569 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:51.569 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T13:26:51.569 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:51.735 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-mgr-dashboard 2026-03-10T13:26:51.735 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:51.738 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:51.739 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T13:26:51.739 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:51.903 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-10T13:26:51.904 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:51.907 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:51.907 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T13:26:51.907 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:52.073 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-mgr-rook 2026-03-10T13:26:52.074 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:52.077 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:52.077 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T13:26:52.077 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:52.248 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-mgr-cephadm 2026-03-10T13:26:52.248 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:52.252 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:52.252 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T13:26:52.252 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:52.436 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 
2026-03-10T13:26:52.436 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:52.436 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size 2026-03-10T13:26:52.436 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:52.436 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T13:26:52.436 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M 2026-03-10T13:26:52.436 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:52.436 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T13:26:52.436 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:52.436 INFO:teuthology.orchestra.run.vm02.stdout:Remove 1 Package 2026-03-10T13:26:52.436 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:52.437 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 3.6 M 2026-03-10T13:26:52.437 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T13:26:52.438 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T13:26:52.438 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T13:26:52.448 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 2026-03-10T13:26:52.448 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T13:26:52.474 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T13:26:52.489 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T13:26:52.553 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T13:26:52.602 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T13:26:52.602 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:52.602 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T13:26:52.602 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:52.602 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:52.602 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:52.781 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-volume 2026-03-10T13:26:52.782 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:52.785 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:52.785 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T13:26:52.785 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:52.960 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 
2026-03-10T13:26:52.961 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:52.961 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repo Size 2026-03-10T13:26:52.961 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:52.961 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T13:26:52.961 INFO:teuthology.orchestra.run.vm02.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k 2026-03-10T13:26:52.961 INFO:teuthology.orchestra.run.vm02.stdout:Removing dependent packages: 2026-03-10T13:26:52.961 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k 2026-03-10T13:26:52.961 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:52.961 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T13:26:52.961 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:52.961 INFO:teuthology.orchestra.run.vm02.stdout:Remove 2 Packages 2026-03-10T13:26:52.961 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:52.961 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 610 k 2026-03-10T13:26:52.961 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T13:26:52.963 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T13:26:52.963 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T13:26:52.973 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 2026-03-10T13:26:52.973 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T13:26:52.999 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T13:26:53.001 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T13:26:53.014 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-10T13:26:53.077 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-10T13:26:53.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T13:26:53.119 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-10T13:26:53.120 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:53.120 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T13:26:53.120 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:53.120 INFO:teuthology.orchestra.run.vm02.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:53.120 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:53.120 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:53.315 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 
2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repo Size 2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M 2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout:Removing dependent packages: 2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k 2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies: 2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k 2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout:Remove 3 Packages 2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 3.7 M 2026-03-10T13:26:53.316 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T13:26:53.318 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T13:26:53.318 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T13:26:53.333 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 
2026-03-10T13:26:53.333 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T13:26:53.364 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T13:26:53.366 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3 2026-03-10T13:26:53.367 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3 2026-03-10T13:26:53.367 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-10T13:26:53.426 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-10T13:26:53.426 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3 2026-03-10T13:26:53.426 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3 2026-03-10T13:26:53.460 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-10T13:26:53.460 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:53.460 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T13:26:53.460 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:53.460 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:53.460 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:53.460 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:53.460 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:53.616 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: libcephfs-devel 2026-03-10T13:26:53.616 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:53.619 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:53.620 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T13:26:53.620 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:53.795 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 
2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout:Removing dependent packages: 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies: 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout:Remove 20 Packages 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 79 M 
2026-03-10T13:26:53.797 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T13:26:53.801 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T13:26:53.801 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T13:26:53.822 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 2026-03-10T13:26:53.822 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T13:26:53.861 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T13:26:53.863 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20 2026-03-10T13:26:53.865 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20 2026-03-10T13:26:53.868 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20 2026-03-10T13:26:53.868 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20 2026-03-10T13:26:53.880 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20 2026-03-10T13:26:53.882 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20 2026-03-10T13:26:53.884 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20 2026-03-10T13:26:53.885 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20 2026-03-10T13:26:53.887 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20 2026-03-10T13:26:53.889 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20 2026-03-10T13:26:53.889 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-10T13:26:53.903 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-10T13:26:53.903 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20 2026-03-10T13:26:53.903 INFO:teuthology.orchestra.run.vm02.stdout:warning: file /etc/ceph: remove failed: No such file or directory 2026-03-10T13:26:53.903 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:53.917 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20 2026-03-10T13:26:53.920 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20 2026-03-10T13:26:53.924 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20 2026-03-10T13:26:53.928 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20 2026-03-10T13:26:53.932 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20 2026-03-10T13:26:53.934 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20 2026-03-10T13:26:53.936 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20 2026-03-10T13:26:53.938 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20 2026-03-10T13:26:53.940 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20 2026-03-10T13:26:53.953 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20 
2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20 2026-03-10T13:26:54.009 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20 2026-03-10T13:26:54.049 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20 2026-03-10T13:26:54.049 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:54.049 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T13:26:54.049 INFO:teuthology.orchestra.run.vm02.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-10T13:26:54.049 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-10T13:26:54.049 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-10T13:26:54.049 INFO:teuthology.orchestra.run.vm02.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-10T13:26:54.049 INFO:teuthology.orchestra.run.vm02.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:54.050 
INFO:teuthology.orchestra.run.vm02.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T13:26:54.050 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:54.255 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: librbd1 2026-03-10T13:26:54.256 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:54.258 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:54.259 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T13:26:54.259 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:54.444 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: python3-rados 2026-03-10T13:26:54.444 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:54.446 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:54.447 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T13:26:54.447 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:54.608 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: python3-rgw 2026-03-10T13:26:54.608 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:54.610 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:54.611 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T13:26:54.611 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:54.768 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: python3-cephfs 2026-03-10T13:26:54.768 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:54.770 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:54.771 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T13:26:54.771 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:54.928 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: python3-rbd 2026-03-10T13:26:54.928 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:54.930 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:54.931 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 
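
Removing librados2 in the transaction above also pulls out every installed package that depends on it (python3-rados, python3-rbd, rbd-nbd, qemu-kvm-block-rbd, and so on) plus dependencies nothing else still needs (libarrow, thrift, librdkafka, ...), which is where the 79 M of freed space comes from. To see that cascade before committing to a removal, a reverse-dependency query gives roughly the same list; a small sketch (the helper name is ours, the dnf invocation is standard):

    import subprocess

    def installed_reverse_deps(pkg):
        # Installed packages that require `pkg`; these are the candidates dnf
        # lists under "Removing dependent packages" in a removal transaction.
        result = subprocess.run(
            ["dnf", "repoquery", "--installed", "--whatrequires", pkg],
            capture_output=True, text=True, check=False,
        )
        return [line for line in result.stdout.splitlines() if line]

    for dep in installed_reverse_deps("librados2"):
        print(dep)
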
2026-03-10T13:26:54.931 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:55.094 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: rbd-fuse 2026-03-10T13:26:55.094 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:55.096 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:55.096 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T13:26:55.096 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:55.255 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: rbd-mirror 2026-03-10T13:26:55.255 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:55.257 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:55.257 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T13:26:55.257 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:55.422 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: rbd-nbd 2026-03-10T13:26:55.423 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T13:26:55.425 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T13:26:55.425 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T13:26:55.425 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T13:26:55.448 DEBUG:teuthology.orchestra.run.vm02:> sudo yum clean all 2026-03-10T13:26:55.582 INFO:teuthology.orchestra.run.vm02.stdout:56 files removed 2026-03-10T13:26:55.607 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-10T13:26:55.631 DEBUG:teuthology.orchestra.run.vm02:> sudo yum clean expire-cache 2026-03-10T13:26:55.793 INFO:teuthology.orchestra.run.vm02.stdout:Cache was expired 2026-03-10T13:26:55.793 INFO:teuthology.orchestra.run.vm02.stdout:0 files removed 2026-03-10T13:26:55.820 DEBUG:teuthology.parallel:result is None 2026-03-10T13:26:55.820 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm02.local 2026-03-10T13:26:55.820 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-10T13:26:55.848 DEBUG:teuthology.orchestra.run.vm02:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf 2026-03-10T13:26:55.915 DEBUG:teuthology.parallel:result is None 2026-03-10T13:26:55.915 DEBUG:teuthology.run_tasks:Unwinding manager clock 2026-03-10T13:26:55.917 INFO:teuthology.task.clock:Checking final clock skew... 
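
Once the packages are gone, the install task scrubs the repository configuration it had added: clean the yum cache, delete /etc/yum.repos.d/ceph.repo, expire the cache again, and restore the saved priorities plugin configuration. A rough local equivalent of those commands as they appear in the DEBUG lines above (run here via subprocess rather than teuthology's remote runner):

    import subprocess

    CLEANUP_COMMANDS = [
        ["sudo", "yum", "clean", "all"],
        ["sudo", "rm", "-f", "/etc/yum.repos.d/ceph.repo"],
        ["sudo", "yum", "clean", "expire-cache"],
        ["sudo", "mv", "-f",
         "/etc/yum/pluginconf.d/priorities.conf.orig",
         "/etc/yum/pluginconf.d/priorities.conf"],
    ]

    for cmd in CLEANUP_COMMANDS:
        # Each entry mirrors one command from the teardown log above.
        subprocess.run(cmd, check=True)
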
2026-03-10T13:26:55.917 DEBUG:teuthology.orchestra.run.vm02:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T13:26:55.971 INFO:teuthology.orchestra.run.vm02.stderr:bash: line 1: ntpq: command not found 2026-03-10T13:26:55.975 INFO:teuthology.orchestra.run.vm02.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample 2026-03-10T13:26:55.975 INFO:teuthology.orchestra.run.vm02.stdout:=============================================================================== 2026-03-10T13:26:55.975 INFO:teuthology.orchestra.run.vm02.stdout:^+ www.h4x-gamers.top 2 6 377 36 -208us[ -204us] +/- 41ms 2026-03-10T13:26:55.975 INFO:teuthology.orchestra.run.vm02.stdout:^* node-4.infogral.is 2 6 377 35 -208us[ -204us] +/- 14ms 2026-03-10T13:26:55.975 INFO:teuthology.orchestra.run.vm02.stdout:^+ ntp1.uni-ulm.de 2 6 377 35 +836us[ +836us] +/- 15ms 2026-03-10T13:26:55.975 INFO:teuthology.orchestra.run.vm02.stdout:^+ node-3.infogral.is 2 6 377 38 -331us[ -327us] +/- 15ms 2026-03-10T13:26:55.976 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab 2026-03-10T13:26:55.978 INFO:teuthology.task.ansible:Skipping ansible cleanup... 2026-03-10T13:26:55.978 DEBUG:teuthology.run_tasks:Unwinding manager selinux 2026-03-10T13:26:55.980 DEBUG:teuthology.run_tasks:Unwinding manager pcp 2026-03-10T13:26:55.982 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer 2026-03-10T13:26:55.984 INFO:teuthology.task.internal:Duration was 564.949174 seconds 2026-03-10T13:26:55.984 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog 2026-03-10T13:26:55.986 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring... 2026-03-10T13:26:55.986 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-10T13:26:56.056 INFO:teuthology.orchestra.run.vm02.stderr:Redirecting to /bin/systemctl restart rsyslog.service 2026-03-10T13:26:56.370 INFO:teuthology.task.internal.syslog:Checking logs for errors... 2026-03-10T13:26:56.370 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm02.local 2026-03-10T13:26:56.370 DEBUG:teuthology.orchestra.run.vm02:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-10T13:26:56.434 INFO:teuthology.task.internal.syslog:Gathering journactl... 2026-03-10T13:26:56.434 DEBUG:teuthology.orchestra.run.vm02:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T13:26:56.875 INFO:teuthology.task.internal.syslog:Compressing syslogs... 
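
The "Checking logs for errors" step above is one long grep pipeline: keep kern.log lines containing BUG, INFO or DEADLOCK as whole words, drop a list of known-benign messages, and report only the first survivor. A compact re-based approximation; the exclusion list here is deliberately truncated to a few of the patterns visible in the command, so treat it as a sketch rather than the full filter:

    import re

    INCLUDE = re.compile(r"\b(?:BUG|INFO|DEADLOCK)\b")
    EXCLUDE = [
        re.compile(r"task .* blocked for more than .* seconds"),
        re.compile(r"lockdep is turned off"),
        re.compile(r"CRON"),
        re.compile(r"ceph-create-keys: INFO"),
    ]

    def first_suspicious_line(path):
        # First kern.log line that matches the include pattern and none of the
        # known-benign exclusions; None means the log looks clean.
        with open(path, errors="replace") as f:
            for line in f:
                if INCLUDE.search(line) and not any(p.search(line) for p in EXCLUDE):
                    return line.rstrip("\n")
        return None

    hit = first_suspicious_line("/home/ubuntu/cephtest/archive/syslog/kern.log")
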
2026-03-10T13:26:56.875 DEBUG:teuthology.orchestra.run.vm02:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T13:26:56.901 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-10T13:26:56.901 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-10T13:26:56.902 INFO:teuthology.orchestra.run.vm02.stderr:gzip/home/ubuntu/cephtest/archive/syslog/kern.log: -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T13:26:56.902 INFO:teuthology.orchestra.run.vm02.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-10T13:26:56.902 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-10T13:26:57.050 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.3% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-10T13:26:57.052 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo 2026-03-10T13:26:57.054 INFO:teuthology.task.internal:Restoring /etc/sudoers... 2026-03-10T13:26:57.054 DEBUG:teuthology.orchestra.run.vm02:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-10T13:26:57.120 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump 2026-03-10T13:26:57.123 DEBUG:teuthology.orchestra.run.vm02:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-10T13:26:57.188 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = core 2026-03-10T13:26:57.201 DEBUG:teuthology.orchestra.run.vm02:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-10T13:26:57.256 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T13:26:57.256 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive 2026-03-10T13:26:57.259 INFO:teuthology.task.internal:Transferring archived files... 2026-03-10T13:26:57.259 DEBUG:teuthology.misc:Transferring archived files from vm02:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1045/remote/vm02 2026-03-10T13:26:57.260 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-10T13:26:57.324 INFO:teuthology.task.internal:Removing archive directory... 2026-03-10T13:26:57.324 DEBUG:teuthology.orchestra.run.vm02:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-10T13:26:57.379 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload 2026-03-10T13:26:57.382 INFO:teuthology.task.internal:Not uploading archives. 2026-03-10T13:26:57.382 DEBUG:teuthology.run_tasks:Unwinding manager internal.base 2026-03-10T13:26:57.384 INFO:teuthology.task.internal:Tidying up after the test... 
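
The "Transferring archived files" step streams the remote archive directory through tar on stdout and unpacks it on the teuthology host, which is why the log shows "sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .". A minimal stand-in for that transfer, assuming passwordless ssh to the node; the function itself is illustrative, while the host and paths are taken from this log:

    import os
    import subprocess
    import tarfile

    def pull_archive(host, remote_dir, local_dir):
        os.makedirs(local_dir, exist_ok=True)
        # Remote side: sudo tar c -f - -C <remote_dir> -- .  (as in the log);
        # local side: unpack the streamed tarball under local_dir.
        proc = subprocess.Popen(
            ["ssh", host, "sudo", "tar", "c", "-f", "-", "-C", remote_dir, "--", "."],
            stdout=subprocess.PIPE,
        )
        with tarfile.open(fileobj=proc.stdout, mode="r|") as tar:
            tar.extractall(path=local_dir)
        proc.wait()

    pull_archive(
        "ubuntu@vm02.local",
        "/home/ubuntu/cephtest/archive",
        "/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1045/remote/vm02",
    )
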
2026-03-10T13:26:57.384 DEBUG:teuthology.orchestra.run.vm02:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T13:26:57.436 INFO:teuthology.orchestra.run.vm02.stdout: 8532141 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 13:26 /home/ubuntu/cephtest
2026-03-10T13:26:57.437 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T13:26:57.442 INFO:teuthology.run:Summary data:
description: orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_cephadm_timeout}
duration: 564.9491741657257
flavor: default
owner: kyr
success: true
2026-03-10T13:26:57.442 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T13:26:57.461 INFO:teuthology.run:pass
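
The run ends by recording the summary shown above (description, duration, flavor, owner, success) and pushing the job record to the local report endpoint before logging "pass". A rough sketch of how such a summary could be assembled and written out; the values are copied from this log, PyYAML is assumed to be available, and the file name follows teuthology's usual summary.yaml convention:

    import time
    import yaml

    start = time.monotonic()
    # ... tasks run and unwind here ...
    summary = {
        "description": "orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream "
                       "mon_election/classic task/test_cephadm_timeout}",
        "duration": time.monotonic() - start,
        "flavor": "default",
        "owner": "kyr",
        "success": True,
    }
    with open("summary.yaml", "w") as f:
        yaml.safe_dump(summary, f, default_flow_style=False)
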