2026-03-10T09:09:19.316 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T09:09:19.320 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T09:09:19.337 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/977
branch: squid
description: orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}}
email: null
first_in_suite: false
flavor: default
job_id: '977'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 1
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: false
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - CEPHADM_FAILED_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - osd.0
  - osd.1
  - osd.2
  - mon.a
  - mgr.a
  - client.0
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm09.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEkeoma1maBlElsfRTDE+DfIInn3Nc3Cv9RsWhO33/ycobDbsjm3JoTOQn8kgB1hpj5m28NNLKiLZ+yX9kEuP00=
tasks:
- install: null
- cephadm: null
- cephadm.shell:
    host.a:
    - ceph osd pool create foo
    - rbd pool init foo
    - ceph orch apply iscsi foo u p
- workunit:
    clients:
      client.0:
      - cephadm/test_iscsi_pids_limit.sh
      - cephadm/test_iscsi_etc_hosts.sh
      - cephadm/test_iscsi_setup.sh
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T09:09:19.337 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T09:09:19.338 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
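The blob above is the fully merged job YAML, and the "Pushing job info" line means the same record lives on the report server. A minimal sketch for inspecting it over HTTP, assuming localhost:8080 is a stock paddles instance (whose REST layout is /runs/<run-name>/jobs/<job-id>/, an assumption not confirmed by this log):

  curl -s http://localhost:8080/runs/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/jobs/977/ \
    | python3 -m json.tool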
2026-03-10T09:09:19.338 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T09:09:19.338 INFO:teuthology.task.internal:Checking packages...
2026-03-10T09:09:19.338 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T09:09:19.338 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T09:09:19.338 INFO:teuthology.packaging:ref: None
2026-03-10T09:09:19.338 INFO:teuthology.packaging:tag: None
2026-03-10T09:09:19.338 INFO:teuthology.packaging:branch: squid
2026-03-10T09:09:19.338 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:09:19.338 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-10T09:09:20.048 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-10T09:09:20.049 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T09:09:20.050 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T09:09:20.050 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T09:09:20.050 INFO:teuthology.task.internal:Saving configuration
2026-03-10T09:09:20.054 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T09:09:20.054 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T09:09:20.061 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm09.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/977', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 09:08:43.061873', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:09', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEkeoma1maBlElsfRTDE+DfIInn3Nc3Cv9RsWhO33/ycobDbsjm3JoTOQn8kgB1hpj5m28NNLKiLZ+yX9kEuP00='}
2026-03-10T09:09:20.061 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T09:09:20.062 INFO:teuthology.task.internal:roles: ubuntu@vm09.local - ['host.a', 'osd.0', 'osd.1', 'osd.2', 'mon.a', 'mgr.a', 'client.0']
2026-03-10T09:09:20.062 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T09:09:20.068 DEBUG:teuthology.task.console_log:vm09 does not support IPMI; excluding
2026-03-10T09:09:20.068 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7fa04e486170>, signals=[15])
2026-03-10T09:09:20.068 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T09:09:20.069 INFO:teuthology.task.internal:Opening connections...
2026-03-10T09:09:20.069 DEBUG:teuthology.task.internal:connecting to ubuntu@vm09.local
2026-03-10T09:09:20.070 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
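The check_packages step above boils down to a single Shaman search, and the query URL appears in the log verbatim, so the lookup can be replayed by hand. A sketch (the response is a JSON list of builds, each typically carrying a chacra_url that the install task later turns into a repo baseurl):

  curl -s 'https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid' \
    | python3 -m json.tool | head -n 20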
2026-03-10T09:09:20.128 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T09:09:20.129 DEBUG:teuthology.orchestra.run.vm09:> uname -m
2026-03-10T09:09:20.285 INFO:teuthology.orchestra.run.vm09.stdout:x86_64
2026-03-10T09:09:20.285 DEBUG:teuthology.orchestra.run.vm09:> cat /etc/os-release
2026-03-10T09:09:20.340 INFO:teuthology.orchestra.run.vm09.stdout:NAME="CentOS Stream"
2026-03-10T09:09:20.340 INFO:teuthology.orchestra.run.vm09.stdout:VERSION="9"
2026-03-10T09:09:20.340 INFO:teuthology.orchestra.run.vm09.stdout:ID="centos"
2026-03-10T09:09:20.340 INFO:teuthology.orchestra.run.vm09.stdout:ID_LIKE="rhel fedora"
2026-03-10T09:09:20.340 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_ID="9"
2026-03-10T09:09:20.340 INFO:teuthology.orchestra.run.vm09.stdout:PLATFORM_ID="platform:el9"
2026-03-10T09:09:20.340 INFO:teuthology.orchestra.run.vm09.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T09:09:20.340 INFO:teuthology.orchestra.run.vm09.stdout:ANSI_COLOR="0;31"
2026-03-10T09:09:20.340 INFO:teuthology.orchestra.run.vm09.stdout:LOGO="fedora-logo-icon"
2026-03-10T09:09:20.340 INFO:teuthology.orchestra.run.vm09.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T09:09:20.340 INFO:teuthology.orchestra.run.vm09.stdout:HOME_URL="https://centos.org/"
2026-03-10T09:09:20.340 INFO:teuthology.orchestra.run.vm09.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T09:09:20.340 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T09:09:20.340 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T09:09:20.341 INFO:teuthology.lock.ops:Updating vm09.local on lock server
2026-03-10T09:09:20.359 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T09:09:20.381 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T09:09:20.382 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T09:09:20.382 DEBUG:teuthology.orchestra.run.vm09:> test '!' -e /home/ubuntu/cephtest
2026-03-10T09:09:20.396 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T09:09:20.401 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T09:09:20.401 DEBUG:teuthology.orchestra.run.vm09:> test -z $(ls -A /var/lib/ceph)
2026-03-10T09:09:20.452 INFO:teuthology.orchestra.run.vm09.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T09:09:20.453 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T09:09:20.460 DEBUG:teuthology.orchestra.run.vm09:> test -e /ceph-qa-ready
2026-03-10T09:09:20.507 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:09:20.691 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T09:09:20.693 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T09:09:20.693 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T09:09:20.708 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T09:09:20.709 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T09:09:20.710 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T09:09:20.710 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive
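The push_inventory step above fills in arch and os_type/os_version for the lock server from the uname and os-release output. /etc/os-release is deliberately shell-sourceable, so the same fields can be read directly; a minimal sketch:

  . /etc/os-release
  echo "$ID $VERSION_ID ($PLATFORM_ID)"    # prints: centos 9 (platform:el9)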
2026-03-10T09:09:20.766 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T09:09:20.767 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T09:09:20.767 DEBUG:teuthology.orchestra.run.vm09:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T09:09:20.818 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:09:20.818 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T09:09:20.882 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T09:09:20.890 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T09:09:20.891 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T09:09:20.892 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T09:09:20.892 DEBUG:teuthology.orchestra.run.vm09:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T09:09:20.953 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T09:09:20.955 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T09:09:20.955 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T09:09:21.008 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T09:09:21.069 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T09:09:21.125 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T09:09:21.126 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T09:09:21.186 DEBUG:teuthology.orchestra.run.vm09:> sudo service rsyslog restart
2026-03-10T09:09:21.252 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T09:09:21.672 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T09:09:21.673 INFO:teuthology.task.internal:Starting timer...
2026-03-10T09:09:21.674 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T09:09:21.677 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T09:09:21.679 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0']}
2026-03-10T09:09:21.679 INFO:teuthology.task.selinux:Excluding vm09: VMs are not yet supported
2026-03-10T09:09:21.679 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T09:09:21.679 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T09:09:21.679 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
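The internal.coredump step above is essentially a core_pattern switch: %t expands to the dump time in epoch seconds and %p to the PID of the crashing process, so every core lands in the job's archive under a unique name, and appending the same line to /etc/sysctl.conf makes the setting survive a reboot. Reduced to its essentials (a sketch using the same archive path as this job):

  sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
  echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf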
2026-03-10T09:09:21.679 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T09:09:21.680 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T09:09:21.681 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T09:09:21.682 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T09:09:22.289 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T09:09:22.294 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T09:09:22.294 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryfcbwp2zp --limit vm09.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T09:10:58.324 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm09.local')]
2026-03-10T09:10:58.325 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm09.local'
2026-03-10T09:10:58.325 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T09:10:58.395 DEBUG:teuthology.orchestra.run.vm09:> true
2026-03-10T09:10:58.479 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm09.local'
2026-03-10T09:10:58.479 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T09:10:58.512 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T09:10:58.512 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T09:10:58.512 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T09:10:58.561 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T09:10:58.578 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T09:10:58.611 INFO:teuthology.orchestra.run.vm09.stderr:sudo: ntpd: command not found
2026-03-10T09:10:58.624 INFO:teuthology.orchestra.run.vm09.stdout:506 Cannot talk to daemon
2026-03-10T09:10:58.642 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T09:10:58.660 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T09:10:58.714 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found
2026-03-10T09:10:58.736 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T09:10:58.736 INFO:teuthology.orchestra.run.vm09.stdout:===============================================================================
2026-03-10T09:10:58.736 INFO:teuthology.orchestra.run.vm09.stdout:^? mailout04.fischl.online 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:10:58.736 INFO:teuthology.orchestra.run.vm09.stdout:^? ntp4.lwlcom.net 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:10:58.736 INFO:teuthology.orchestra.run.vm09.stdout:^? 141.84.43.73 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:10:58.736 INFO:teuthology.orchestra.run.vm09.stdout:^? ntp2.lwlcom.net 0 6 0 - +0ns[ +0ns] +/- 0ns
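On CentOS Stream 9 no ntp units or binaries exist, so only the chrony branches of the clock task's fallback chain above do anything: the "506 Cannot talk to daemon" is chronyc makestep running against the chronyd that was just stopped, and the all-zero sources table is a freshly restarted daemon that has not measured anything yet (Reach 0, +0ns). The effective sequence on this distro reduces to (a sketch):

  sudo systemctl stop chronyd.service
  sudo chronyc makestep          # fails with 506 here: the daemon is stopped
  sudo systemctl start chronyd.service
  chronyc sources                # skew check against the configured NTP servers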
2026-03-10T09:10:58.736 INFO:teuthology.run_tasks:Running task install...
2026-03-10T09:10:58.852 DEBUG:teuthology.task.install:project ceph
2026-03-10T09:10:58.852 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T09:10:58.852 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T09:10:58.852 INFO:teuthology.task.install:Using flavor: default
2026-03-10T09:10:58.855 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-10T09:10:58.855 INFO:teuthology.task.install:extra packages: []
2026-03-10T09:10:58.855 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-10T09:10:58.855 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:10:59.571 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-10T09:10:59.571 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
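The chacra URL above follows a fixed scheme, /r/<project>/<ref>/<sha1>/<distro>/<release>/flavors/<flavor>/, and what it serves is a plain rpm-md repo, so reachability can be checked before any install runs. A sketch (assuming the standard rpm-md layout where $basearch expands to x86_64 on this VM; the shard prefix "3." comes back from Shaman, not from the template):

  sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
  curl -sfI "https://3.chacra.ceph.com/r/ceph/squid/${sha1}/centos/9/flavors/default/x86_64/repodata/repomd.xml" >/dev/null && echo repo reachable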
2026-03-10T09:11:00.065 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-10T09:11:00.065 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T09:11:00.065 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-10T09:11:00.098 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-10T09:11:00.098 DEBUG:teuthology.orchestra.run.vm09:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-10T09:11:00.173 DEBUG:teuthology.orchestra.run.vm09:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-10T09:11:00.261 DEBUG:teuthology.orchestra.run.vm09:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-10T09:11:00.295 INFO:teuthology.orchestra.run.vm09.stdout:check_obsoletes = 1
2026-03-10T09:11:00.297 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean all
2026-03-10T09:11:00.498 INFO:teuthology.orchestra.run.vm09.stdout:41 files removed
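The first sed expression above relies on the classic whole-file slurp idiom ':a;N;$!ba' (define label a, append the next line to the pattern space, loop until the last line), which pulls the entire repo file into one pattern space so the substitution can match across the newline between 'enabled=1' and 'gpgcheck=0', injecting 'priority=1' into every section. A standalone sketch of the same idiom:

  printf 'enabled=1\ngpgcheck=0\n' | sed -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g'
  # prints: enabled=1
  #         priority=1
  #         gpgcheck=0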
2026-03-10T09:11:00.524 DEBUG:teuthology.orchestra.run.vm09:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-10T09:11:01.909 INFO:teuthology.orchestra.run.vm09.stdout:ceph packages for x86_64 70 kB/s | 84 kB 00:01
2026-03-10T09:11:02.888 INFO:teuthology.orchestra.run.vm09.stdout:ceph noarch packages 12 kB/s | 12 kB 00:00
2026-03-10T09:11:03.883 INFO:teuthology.orchestra.run.vm09.stdout:ceph source packages 1.9 kB/s | 1.9 kB 00:00
2026-03-10T09:11:06.104 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - BaseOS 4.0 MB/s | 8.9 MB 00:02
2026-03-10T09:11:08.033 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - AppStream 20 MB/s | 27 MB 00:01
2026-03-10T09:11:12.114 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - CRB 6.3 MB/s | 8.0 MB 00:01
2026-03-10T09:11:13.553 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - Extras packages 34 kB/s | 20 kB 00:00
2026-03-10T09:11:14.969 INFO:teuthology.orchestra.run.vm09.stdout:Extra Packages for Enterprise Linux 15 MB/s | 20 MB 00:01
2026-03-10T09:11:19.522 INFO:teuthology.orchestra.run.vm09.stdout:lab-extras 64 kB/s | 50 kB 00:00
2026-03-10T09:11:20.842 INFO:teuthology.orchestra.run.vm09.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T09:11:20.842 INFO:teuthology.orchestra.run.vm09.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T09:11:20.845 INFO:teuthology.orchestra.run.vm09.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed.
2026-03-10T09:11:20.846 INFO:teuthology.orchestra.run.vm09.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-10T09:11:20.873 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout:======================================================================================
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout:======================================================================================
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout:Installing:
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout:Upgrading:
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout:Installing dependencies:
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k
2026-03-10T09:11:20.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-10T09:11:20.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: unzip x86_64 6.0-59.el9 baseos 182 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: zip x86_64 3.0-35.el9 baseos 266 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout:Installing weak dependencies:
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout:======================================================================================
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout:Install 135 Packages
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout:Upgrade 2 Packages
2026-03-10T09:11:20.880 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:11:20.881 INFO:teuthology.orchestra.run.vm09.stdout:Total download size: 210 M
2026-03-10T09:11:20.881 INFO:teuthology.orchestra.run.vm09.stdout:Downloading Packages:
2026-03-10T09:11:22.977 INFO:teuthology.orchestra.run.vm09.stdout:(1/137): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 14 kB/s | 6.5 kB 00:00
2026-03-10T09:11:25.885 INFO:teuthology.orchestra.run.vm09.stdout:(2/137): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 405 kB/s | 1.2 MB 00:02
2026-03-10T09:11:26.117 INFO:teuthology.orchestra.run.vm09.stdout:(3/137): ceph-immutable-object-cache-19.2.3-678 625 kB/s | 145 kB 00:00
2026-03-10T09:11:28.202 INFO:teuthology.orchestra.run.vm09.stdout:(4/137): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 1.2 MB/s | 2.4 MB 00:02
2026-03-10T09:11:28.783 INFO:teuthology.orchestra.run.vm09.stdout:(5/137): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 1.9 MB/s | 1.1 MB 00:00
2026-03-10T09:11:28.917 INFO:teuthology.orchestra.run.vm09.stdout:(6/137): ceph-base-19.2.3-678.ge911bdeb.el9.x86 879 kB/s | 5.5 MB 00:06
2026-03-10T09:11:30.412 INFO:teuthology.orchestra.run.vm09.stdout:(7/137): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 2.9 MB/s | 4.7 MB 00:01
2026-03-10T09:11:32.184 INFO:teuthology.orchestra.run.vm09.stdout:(8/137): ceph-radosgw-19.2.3-678.ge911bdeb.el9. 6.1 MB/s | 11 MB 00:01
2026-03-10T09:11:32.302 INFO:teuthology.orchestra.run.vm09.stdout:(9/137): ceph-selinux-19.2.3-678.ge911bdeb.el9. 212 kB/s | 25 kB 00:00
2026-03-10T09:11:32.511 INFO:teuthology.orchestra.run.vm09.stdout:(10/137): ceph-common-19.2.3-678.ge911bdeb.el9. 2.2 MB/s | 22 MB 00:10
2026-03-10T09:11:32.630 INFO:teuthology.orchestra.run.vm09.stdout:(11/137): libcephfs-devel-19.2.3-678.ge911bdeb. 284 kB/s | 34 kB 00:00
2026-03-10T09:11:32.717 INFO:teuthology.orchestra.run.vm09.stdout:(12/137): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 4.5 MB/s | 17 MB 00:03
2026-03-10T09:11:32.836 INFO:teuthology.orchestra.run.vm09.stdout:(13/137): libcephsqlite-19.2.3-678.ge911bdeb.el 1.3 MB/s | 163 kB 00:00
2026-03-10T09:11:32.872 INFO:teuthology.orchestra.run.vm09.stdout:(14/137): libcephfs2-19.2.3-678.ge911bdeb.el9.x 4.0 MB/s | 1.0 MB 00:00
2026-03-10T09:11:32.954 INFO:teuthology.orchestra.run.vm09.stdout:(15/137): librados-devel-19.2.3-678.ge911bdeb.e 1.0 MB/s | 127 kB 00:00
2026-03-10T09:11:32.996 INFO:teuthology.orchestra.run.vm09.stdout:(16/137): libradosstriper1-19.2.3-678.ge911bdeb 4.0 MB/s | 503 kB 00:00
2026-03-10T09:11:33.116 INFO:teuthology.orchestra.run.vm09.stdout:(17/137): python3-ceph-argparse-19.2.3-678.ge91 376 kB/s | 45 kB 00:00
2026-03-10T09:11:33.236 INFO:teuthology.orchestra.run.vm09.stdout:(18/137): python3-ceph-common-19.2.3-678.ge911b 1.2 MB/s | 142 kB 00:00
2026-03-10T09:11:33.358 INFO:teuthology.orchestra.run.vm09.stdout:(19/137): python3-cephfs-19.2.3-678.ge911bdeb.e 1.3 MB/s | 165 kB 00:00
2026-03-10T09:11:33.480 INFO:teuthology.orchestra.run.vm09.stdout:(20/137): python3-rados-19.2.3-678.ge911bdeb.el 2.6 MB/s | 323 kB 00:00
2026-03-10T09:11:33.601 INFO:teuthology.orchestra.run.vm09.stdout:(21/137): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.5 MB/s | 303 kB 00:00
2026-03-10T09:11:33.720 INFO:teuthology.orchestra.run.vm09.stdout:(22/137): python3-rgw-19.2.3-678.ge911bdeb.el9. 840 kB/s | 100 kB 00:00
2026-03-10T09:11:33.839 INFO:teuthology.orchestra.run.vm09.stdout:(23/137): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 718 kB/s | 85 kB 00:00
2026-03-10T09:11:34.017 INFO:teuthology.orchestra.run.vm09.stdout:(24/137): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 5.1 MB/s | 5.4 MB 00:01
2026-03-10T09:11:34.136 INFO:teuthology.orchestra.run.vm09.stdout:(25/137): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.4 MB/s | 171 kB 00:00
2026-03-10T09:11:34.254 INFO:teuthology.orchestra.run.vm09.stdout:(26/137): ceph-grafana-dashboards-19.2.3-678.ge 266 kB/s | 31 kB 00:00
2026-03-10T09:11:34.373 INFO:teuthology.orchestra.run.vm09.stdout:(27/137): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00
2026-03-10T09:11:34.666 INFO:teuthology.orchestra.run.vm09.stdout:(28/137): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 3.8 MB/s | 3.1 MB 00:00
2026-03-10T09:11:35.086 INFO:teuthology.orchestra.run.vm09.stdout:(29/137): ceph-mgr-dashboard-19.2.3-678.ge911bd 5.3 MB/s | 3.8 MB 00:00
2026-03-10T09:11:35.313 INFO:teuthology.orchestra.run.vm09.stdout:(30/137): ceph-mgr-modules-core-19.2.3-678.ge91 1.1 MB/s | 253 kB 00:00
2026-03-10T09:11:35.432 INFO:teuthology.orchestra.run.vm09.stdout:(31/137): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 415 kB/s | 49 kB 00:00
2026-03-10T09:11:35.549 INFO:teuthology.orchestra.run.vm09.stdout:(32/137): ceph-prometheus-alerts-19.2.3-678.ge9 143 kB/s | 17 kB 00:00
2026-03-10T09:11:35.670 INFO:teuthology.orchestra.run.vm09.stdout:(33/137): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.4 MB/s | 299 kB 00:00
2026-03-10T09:11:35.906 INFO:teuthology.orchestra.run.vm09.stdout:(34/137): cephadm-19.2.3-678.ge911bdeb.el9.noar 3.2 MB/s | 769 kB 00:00
2026-03-10T09:11:36.065 INFO:teuthology.orchestra.run.vm09.stdout:(35/137): ceph-test-19.2.3-678.ge911bdeb.el9.x8 13 MB/s | 50 MB 00:03
2026-03-10T09:11:36.163 INFO:teuthology.orchestra.run.vm09.stdout:(36/137): ledmon-libs-1.1.0-3.el9.x86_64.rpm 410 kB/s | 40 kB 00:00
2026-03-10T09:11:36.233 INFO:teuthology.orchestra.run.vm09.stdout:(37/137): libconfig-1.7.2-9.el9.x86_64.rpm 1.0 MB/s | 72 kB 00:00
2026-03-10T09:11:36.246 INFO:teuthology.orchestra.run.vm09.stdout:(38/137): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.0 MB/s | 351 kB 00:00
2026-03-10T09:11:36.292 INFO:teuthology.orchestra.run.vm09.stdout:(39/137): libquadmath-11.5.0-14.el9.x86_64.rpm 4.0 MB/s | 184 kB 00:00
2026-03-10T09:11:36.322 INFO:teuthology.orchestra.run.vm09.stdout:(40/137): mailcap-2.1.49-5.el9.noarch.rpm 1.1 MB/s | 33 kB 00:00
2026-03-10T09:11:36.338 INFO:teuthology.orchestra.run.vm09.stdout:(41/137): libgfortran-11.5.0-14.el9.x86_64.rpm 7.4 MB/s | 794 kB 00:00
2026-03-10T09:11:36.361 INFO:teuthology.orchestra.run.vm09.stdout:(42/137): pciutils-3.7.0-7.el9.x86_64.rpm 2.3 MB/s | 93 kB 00:00
2026-03-10T09:11:36.382 INFO:teuthology.orchestra.run.vm09.stdout:(43/137): python3-cffi-1.14.5-5.el9.x86_64.rpm 5.7 MB/s | 253 kB 00:00
2026-03-10T09:11:36.425 INFO:teuthology.orchestra.run.vm09.stdout:(44/137): python3-ply-3.11-14.el9.noarch.rpm 2.4 MB/s | 106 kB 00:00
2026-03-10T09:11:36.458 INFO:teuthology.orchestra.run.vm09.stdout:(45/137): python3-cryptography-36.0.1-5.el9.x86 13 MB/s | 1.2 MB 00:00
2026-03-10T09:11:36.473 INFO:teuthology.orchestra.run.vm09.stdout:(46/137): python3-pycparser-2.20-6.el9.noarch.r 2.8 MB/s | 135 kB 00:00
2026-03-10T09:11:36.489 INFO:teuthology.orchestra.run.vm09.stdout:(47/137): python3-pyparsing-2.4.7-9.el9.noarch. 4.9 MB/s | 150 kB 00:00
2026-03-10T09:11:36.523 INFO:teuthology.orchestra.run.vm09.stdout:(48/137): ceph-mgr-diskprediction-local-19.2.3- 4.0 MB/s | 7.4 MB 00:01
2026-03-10T09:11:36.524 INFO:teuthology.orchestra.run.vm09.stdout:(49/137): python3-requests-2.25.1-10.el9.noarch 2.4 MB/s | 126 kB 00:00
2026-03-10T09:11:36.525 INFO:teuthology.orchestra.run.vm09.stdout:(50/137): python3-urllib3-1.26.5-7.el9.noarch.r 5.8 MB/s | 218 kB 00:00
2026-03-10T09:11:36.577 INFO:teuthology.orchestra.run.vm09.stdout:(51/137): zip-3.0-35.el9.x86_64.rpm 5.0 MB/s | 266 kB 00:00
2026-03-10T09:11:36.666 INFO:teuthology.orchestra.run.vm09.stdout:(52/137): unzip-6.0-59.el9.x86_64.rpm 1.2 MB/s | 182 kB 00:00
2026-03-10T09:11:36.856 INFO:teuthology.orchestra.run.vm09.stdout:(53/137): flexiblas-3.0.4-9.el9.x86_64.rpm 106 kB/s | 30 kB 00:00
2026-03-10T09:11:36.911 INFO:teuthology.orchestra.run.vm09.stdout:(54/137): boost-program-options-1.75.0-13.el9.x 270 kB/s | 104 kB 00:00
2026-03-10T09:11:36.990 INFO:teuthology.orchestra.run.vm09.stdout:(55/137): flexiblas-openblas-openmp-3.0.4-9.el9 111 kB/s | 15 kB 00:00
2026-03-10T09:11:37.106 INFO:teuthology.orchestra.run.vm09.stdout:(56/137): libnbd-1.20.3-4.el9.x86_64.rpm 843 kB/s | 164 kB 00:00
2026-03-10T09:11:37.201 INFO:teuthology.orchestra.run.vm09.stdout:(57/137): librabbitmq-0.11.0-7.el9.x86_64.rpm 474 kB/s | 45 kB 00:00
2026-03-10T09:11:37.327 INFO:teuthology.orchestra.run.vm09.stdout:(58/137): libpmemobj-1.12.1-1.el9.x86_64.rpm 475 kB/s | 160 kB 00:00
2026-03-10T09:11:37.450 INFO:teuthology.orchestra.run.vm09.stdout:(59/137): librdkafka-1.6.1-102.el9.x86_64.rpm 2.6 MB/s | 662 kB 00:00
2026-03-10T09:11:37.475 INFO:teuthology.orchestra.run.vm09.stdout:(60/137): flexiblas-netlib-3.0.4-9.el9.x86_64.r 3.7 MB/s | 3.0 MB 00:00
2026-03-10T09:11:37.508 INFO:teuthology.orchestra.run.vm09.stdout:(61/137): libstoragemgmt-1.10.1-1.el9.x86_64.rp 1.3 MB/s | 246 kB 00:00
2026-03-10T09:11:37.616 INFO:teuthology.orchestra.run.vm09.stdout:(62/137): lttng-ust-2.12.0-6.el9.x86_64.rpm 2.0 MB/s | 292 kB 00:00
2026-03-10T09:11:37.617 INFO:teuthology.orchestra.run.vm09.stdout:(63/137): libxslt-1.1.34-12.el9.x86_64.rpm 1.4 MB/s | 233 kB 00:00
2026-03-10T09:11:37.642 INFO:teuthology.orchestra.run.vm09.stdout:(64/137): lua-5.4.4-4.el9.x86_64.rpm 1.4 MB/s | 188 kB 00:00
2026-03-10T09:11:37.726 INFO:teuthology.orchestra.run.vm09.stdout:(65/137): openblas-0.3.29-1.el9.x86_64.rpm 383 kB/s | 42 kB 00:00
2026-03-10T09:11:37.935 INFO:teuthology.orchestra.run.vm09.stdout:(66/137): protobuf-3.14.0-17.el9.x86_64.rpm 3.4 MB/s | 1.0 MB 00:00
2026-03-10T09:11:37.982 INFO:teuthology.orchestra.run.vm09.stdout:(67/137): openblas-openmp-0.3.29-1.el9.x86_64.r 15 MB/s | 5.3 MB 00:00
2026-03-10T09:11:38.052 INFO:teuthology.orchestra.run.vm09.stdout:(68/137): python3-devel-3.9.25-3.el9.x86_64.rpm 2.1 MB/s | 244 kB 00:00
2026-03-10T09:11:38.136 INFO:teuthology.orchestra.run.vm09.stdout:(69/137): python3-jinja2-2.11.3-8.el9.noarch.rp 1.6 MB/s | 249 kB 00:00
2026-03-10T09:11:38.162 INFO:teuthology.orchestra.run.vm09.stdout:(70/137): python3-jmespath-1.0.1-1.el9.noarch.r 431 kB/s | 48 kB 00:00
2026-03-10T09:11:38.178 INFO:teuthology.orchestra.run.vm09.stdout:(71/137): python3-babel-2.9.1-2.el9.noarch.rpm 13 MB/s | 6.0 MB 00:00
2026-03-10T09:11:38.234 INFO:teuthology.orchestra.run.vm09.stdout:(72/137): python3-libstoragemgmt-1.10.1-1.el9.x 1.8 MB/s | 177 kB 00:00
2026-03-10T09:11:38.280 INFO:teuthology.orchestra.run.vm09.stdout:(73/137): python3-mako-1.1.4-6.el9.noarch.rpm 1.4 MB/s | 172 kB 00:00
2026-03-10T09:11:38.290 INFO:teuthology.orchestra.run.vm09.stdout:(74/137): python3-markupsafe-1.1.1-12.el9.x86_6 312 kB/s | 35 kB 00:00
2026-03-10T09:11:38.406 INFO:teuthology.orchestra.run.vm09.stdout:(75/137): python3-packaging-20.9-5.el9.noarch.r 664 kB/s | 77 kB 00:00
2026-03-10T09:11:38.417 INFO:teuthology.orchestra.run.vm09.stdout:(76/137): python3-numpy-f2py-1.23.5-2.el9.x86_6 3.2 MB/s | 442 kB 00:00
2026-03-10T09:11:38.529 INFO:teuthology.orchestra.run.vm09.stdout:(77/137): python3-protobuf-3.14.0-17.el9.noarch 2.1 MB/s | 267 kB 00:00
2026-03-10T09:11:38.557 INFO:teuthology.orchestra.run.vm09.stdout:(78/137): python3-pyasn1-0.4.8-7.el9.noarch.rpm 1.1 MB/s | 157 kB 00:00
2026-03-10T09:11:38.659 INFO:teuthology.orchestra.run.vm09.stdout:(79/137): python3-requests-oauthlib-1.3.0-12.el 528 kB/s | 54 kB 00:00
2026-03-10T09:11:38.667 INFO:teuthology.orchestra.run.vm09.stdout:(80/137): python3-pyasn1-modules-0.4.8-7.el9.no 2.0 MB/s | 277 kB 00:00
2026-03-10T09:11:38.783 INFO:teuthology.orchestra.run.vm09.stdout:(81/137): python3-numpy-1.23.5-2.el9.x86_64.rpm 11 MB/s | 6.1 MB 00:00
2026-03-10T09:11:38.803 INFO:teuthology.orchestra.run.vm09.stdout:(82/137): python3-toml-0.10.2-6.el9.noarch.rpm 307 kB/s | 42 kB 00:00
2026-03-10T09:11:38.892 INFO:teuthology.orchestra.run.vm09.stdout:(83/137): qatlib-service-25.08.0-2.el9.x86_64.r 417 kB/s | 37 kB 00:00
2026-03-10T09:11:38.902 INFO:teuthology.orchestra.run.vm09.stdout:(84/137): qatlib-25.08.0-2.el9.x86_64.rpm 2.0 MB/s | 240 kB 00:00
2026-03-10T09:11:38.990 INFO:teuthology.orchestra.run.vm09.stdout:(85/137): qatzip-libs-1.3.1-1.el9.x86_64.rpm 678 kB/s | 66 kB 00:00
2026-03-10T09:11:39.020 INFO:teuthology.orchestra.run.vm09.stdout:(86/137): socat-1.7.4.1-8.el9.x86_64.rpm 2.5 MB/s | 303 kB 00:00
2026-03-10T09:11:39.125 INFO:teuthology.orchestra.run.vm09.stdout:(87/137): xmlstarlet-1.6.1-20.el9.x86_64.rpm 471 kB/s | 64 kB 00:00
2026-03-10T09:11:39.155 INFO:teuthology.orchestra.run.vm09.stdout:(88/137): lua-devel-5.4.4-4.el9.x86_64.rpm 166 kB/s | 22 kB 00:00
2026-03-10T09:11:39.356 INFO:teuthology.orchestra.run.vm09.stdout:(89/137): protobuf-compiler-3.14.0-17.el9.x86_6 3.6 MB/s | 862 kB 00:00
2026-03-10T09:11:39.394 INFO:teuthology.orchestra.run.vm09.stdout:(90/137): abseil-cpp-20211102.0-4.el9.x86_64.rp 2.3 MB/s | 551 kB 00:00
2026-03-10T09:11:39.396 INFO:teuthology.orchestra.run.vm09.stdout:(91/137): gperftools-libs-2.9.1-3.el9.x86_64.rp 7.5 MB/s | 308 kB 00:00
2026-03-10T09:11:39.443 INFO:teuthology.orchestra.run.vm09.stdout:(92/137): grpc-data-1.46.7-10.el9.noarch.rpm 397 kB/s | 19 kB 00:00
2026-03-10T09:11:39.586 INFO:teuthology.orchestra.run.vm09.stdout:(93/137): python3-scipy-1.9.3-2.el9.x86_64.rpm 21 MB/s | 19 MB 00:00
2026-03-10T09:11:39.587 INFO:teuthology.orchestra.run.vm09.stdout:(94/137): libarrow-doc-9.0.0-15.el9.noarch.rpm 172 kB/s | 25 kB 00:00
2026-03-10T09:11:39.600 INFO:teuthology.orchestra.run.vm09.stdout:(95/137): libarrow-9.0.0-15.el9.x86_64.rpm 22 MB/s | 4.4 MB 00:00
2026-03-10T09:11:39.617 INFO:teuthology.orchestra.run.vm09.stdout:(96/137): liboath-2.6.12-1.el9.x86_64.rpm 1.6 MB/s | 49 kB 00:00
2026-03-10T09:11:39.618 INFO:teuthology.orchestra.run.vm09.stdout:(97/137): libunwind-1.6.2-1.el9.x86_64.rpm 2.2 MB/s | 67 kB 00:00
2026-03-10T09:11:39.632 INFO:teuthology.orchestra.run.vm09.stdout:(98/137): luarocks-3.9.2-5.el9.noarch.rpm 4.7 MB/s | 151 kB 00:00
2026-03-10T09:11:39.656 INFO:teuthology.orchestra.run.vm09.stdout:(99/137): parquet-libs-9.0.0-15.el9.x86_64.rpm 21 MB/s | 838 kB 00:00
2026-03-10T09:11:39.660 INFO:teuthology.orchestra.run.vm09.stdout:(100/137): python3-asyncssh-2.13.2-5.el9.noarch 13 MB/s | 548 kB 00:00
2026-03-10T09:11:39.663 INFO:teuthology.orchestra.run.vm09.stdout:(101/137): python3-autocommand-2.2.2-8.el9.noar 965 kB/s | 29 kB 00:00
2026-03-10T09:11:39.691 INFO:teuthology.orchestra.run.vm09.stdout:(102/137): python3-backports-tarfile-1.2.0-1.el 1.7 MB/s | 60 kB 00:00
2026-03-10T09:11:39.692 INFO:teuthology.orchestra.run.vm09.stdout:(103/137): python3-bcrypt-3.2.2-1.el9.x86_64.rp 1.3 MB/s | 43 kB 00:00
2026-03-10T09:11:39.693 INFO:teuthology.orchestra.run.vm09.stdout:(104/137): python3-cachetools-4.2.4-1.el9.noarc 1.0 MB/s | 32 kB 00:00
2026-03-10T09:11:39.721 INFO:teuthology.orchestra.run.vm09.stdout:(105/137): python3-certifi-2023.05.07-4.el9.noa 476 kB/s | 14 kB 00:00
2026-03-10T09:11:39.724 INFO:teuthology.orchestra.run.vm09.stdout:(106/137): python3-cheroot-10.0.1-4.el9.noarch. 5.3 MB/s | 173 kB 00:00
2026-03-10T09:11:39.728 INFO:teuthology.orchestra.run.vm09.stdout:(107/137): python3-cherrypy-18.6.1-2.el9.noarch 9.9 MB/s | 358 kB 00:00
2026-03-10T09:11:39.753 INFO:teuthology.orchestra.run.vm09.stdout:(108/137): python3-google-auth-2.45.0-1.el9.noa 7.7 MB/s | 254 kB 00:00
2026-03-10T09:11:39.790 INFO:teuthology.orchestra.run.vm09.stdout:(109/137): python3-grpcio-1.46.7-10.el9.x86_64. 31 MB/s | 2.0 MB 00:00
2026-03-10T09:11:39.791 INFO:teuthology.orchestra.run.vm09.stdout:(110/137): python3-grpcio-tools-1.46.7-10.el9.x 2.2 MB/s | 144 kB 00:00
2026-03-10T09:11:39.792 INFO:teuthology.orchestra.run.vm09.stdout:(111/137): python3-jaraco-8.2.1-3.el9.noarch.rp 279 kB/s | 11 kB 00:00
2026-03-10T09:11:39.820 INFO:teuthology.orchestra.run.vm09.stdout:(112/137): python3-jaraco-classes-3.2.1-5.el9.n 588 kB/s | 18 kB 00:00
2026-03-10T09:11:39.822 INFO:teuthology.orchestra.run.vm09.stdout:(113/137): python3-jaraco-collections-3.0.0-8.e 778 kB/s | 23 kB 00:00
2026-03-10T09:11:39.822 INFO:teuthology.orchestra.run.vm09.stdout:(114/137): python3-jaraco-context-6.0.1-3.el9.n 647 kB/s | 20 kB 00:00
2026-03-10T09:11:39.850 INFO:teuthology.orchestra.run.vm09.stdout:(115/137): python3-jaraco-functools-3.5.0-2.el9 650 kB/s | 19 kB 00:00
2026-03-10T09:11:39.852 INFO:teuthology.orchestra.run.vm09.stdout:(116/137): python3-jaraco-text-4.0.0-2.el9.noar 885 kB/s | 26 kB 00:00
2026-03-10T09:11:39.870 INFO:teuthology.orchestra.run.vm09.stdout:(117/137): python3-kubernetes-26.1.0-3.el9.noar 21 MB/s | 1.0 MB 00:00
2026-03-10T09:11:39.882 INFO:teuthology.orchestra.run.vm09.stdout:(118/137): python3-logutils-0.3.5-21.el9.noarch 1.4 MB/s | 46 kB 00:00
2.4 MB/s | 79 kB 00:00 2026-03-10T09:11:39.901 INFO:teuthology.orchestra.run.vm09.stdout:(120/137): python3-natsort-7.1.1-5.el9.noarch.r 1.8 MB/s | 58 kB 00:00 2026-03-10T09:11:39.916 INFO:teuthology.orchestra.run.vm09.stdout:(121/137): python3-pecan-1.4.2-3.el9.noarch.rpm 7.8 MB/s | 272 kB 00:00 2026-03-10T09:11:39.917 INFO:teuthology.orchestra.run.vm09.stdout:(122/137): python3-portend-3.1.0-2.el9.noarch.r 503 kB/s | 16 kB 00:00 2026-03-10T09:11:39.933 INFO:teuthology.orchestra.run.vm09.stdout:(123/137): python3-pyOpenSSL-21.0.0-1.el9.noarc 2.8 MB/s | 90 kB 00:00 2026-03-10T09:11:39.947 INFO:teuthology.orchestra.run.vm09.stdout:(124/137): python3-repoze-lru-0.7-16.el9.noarch 1.0 MB/s | 31 kB 00:00 2026-03-10T09:11:39.950 INFO:teuthology.orchestra.run.vm09.stdout:(125/137): python3-routes-2.5.1-5.el9.noarch.rp 5.6 MB/s | 188 kB 00:00 2026-03-10T09:11:39.965 INFO:teuthology.orchestra.run.vm09.stdout:(126/137): python3-rsa-4.9-2.el9.noarch.rpm 1.8 MB/s | 59 kB 00:00 2026-03-10T09:11:39.978 INFO:teuthology.orchestra.run.vm09.stdout:(127/137): python3-tempora-5.0.0-2.el9.noarch.r 1.1 MB/s | 36 kB 00:00 2026-03-10T09:11:39.981 INFO:teuthology.orchestra.run.vm09.stdout:(128/137): python3-typing-extensions-4.15.0-1.e 2.7 MB/s | 86 kB 00:00 2026-03-10T09:11:39.998 INFO:teuthology.orchestra.run.vm09.stdout:(129/137): python3-webob-1.8.8-2.el9.noarch.rpm 6.7 MB/s | 230 kB 00:00 2026-03-10T09:11:40.009 INFO:teuthology.orchestra.run.vm09.stdout:(130/137): python3-websocket-client-1.2.3-2.el9 2.9 MB/s | 90 kB 00:00 2026-03-10T09:11:40.017 INFO:teuthology.orchestra.run.vm09.stdout:(131/137): python3-werkzeug-2.0.3-3.el9.1.noarc 12 MB/s | 427 kB 00:00 2026-03-10T09:11:40.029 INFO:teuthology.orchestra.run.vm09.stdout:(132/137): python3-xmltodict-0.12.0-15.el9.noar 736 kB/s | 22 kB 00:00 2026-03-10T09:11:40.040 INFO:teuthology.orchestra.run.vm09.stdout:(133/137): python3-zc-lockfile-2.0-10.el9.noarc 651 kB/s | 20 kB 00:00 2026-03-10T09:11:40.050 INFO:teuthology.orchestra.run.vm09.stdout:(134/137): re2-20211101-20.el9.x86_64.rpm 5.6 MB/s | 191 kB 00:00 2026-03-10T09:11:40.081 INFO:teuthology.orchestra.run.vm09.stdout:(135/137): thrift-0.15.0-4.el9.x86_64.rpm 30 MB/s | 1.6 MB 00:00 2026-03-10T09:11:41.088 INFO:teuthology.orchestra.run.vm09.stdout:(136/137): librados2-19.2.3-678.ge911bdeb.el9.x 3.3 MB/s | 3.4 MB 00:01 2026-03-10T09:11:41.222 INFO:teuthology.orchestra.run.vm09.stdout:(137/137): librbd1-19.2.3-678.ge911bdeb.el9.x86 2.7 MB/s | 3.2 MB 00:01 2026-03-10T09:11:41.224 INFO:teuthology.orchestra.run.vm09.stdout:-------------------------------------------------------------------------------- 2026-03-10T09:11:41.224 INFO:teuthology.orchestra.run.vm09.stdout:Total 10 MB/s | 210 MB 00:20 2026-03-10T09:11:41.695 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-10T09:11:41.741 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-10T09:11:41.741 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-10T09:11:42.551 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 
2026-03-10T09:11:42.552 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T09:11:43.431 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T09:11:43.444 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/139
2026-03-10T09:11:43.456 INFO:teuthology.orchestra.run.vm09.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/139
2026-03-10T09:11:43.618 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/139
2026-03-10T09:11:43.620 INFO:teuthology.orchestra.run.vm09.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/139
2026-03-10T09:11:43.678 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/139
2026-03-10T09:11:43.679 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/139
2026-03-10T09:11:43.707 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/139
2026-03-10T09:11:43.717 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/139
2026-03-10T09:11:43.721 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/139
2026-03-10T09:11:43.723 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/139
2026-03-10T09:11:43.729 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/139
2026-03-10T09:11:43.738 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/139
2026-03-10T09:11:43.740 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/139
2026-03-10T09:11:43.774 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/139
2026-03-10T09:11:43.776 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/139
2026-03-10T09:11:43.790 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/139
2026-03-10T09:11:43.822 INFO:teuthology.orchestra.run.vm09.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/139
2026-03-10T09:11:43.859 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/139
2026-03-10T09:11:43.865 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/139
2026-03-10T09:11:43.889 INFO:teuthology.orchestra.run.vm09.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/139
2026-03-10T09:11:43.897 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/139
2026-03-10T09:11:43.908 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 18/139
2026-03-10T09:11:43.914 INFO:teuthology.orchestra.run.vm09.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 19/139
2026-03-10T09:11:43.918 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lua-5.4.4-4.el9.x86_64 20/139
2026-03-10T09:11:43.924 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 21/139
2026-03-10T09:11:43.953 INFO:teuthology.orchestra.run.vm09.stdout: Installing : unzip-6.0-59.el9.x86_64 22/139
2026-03-10T09:11:43.978 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 23/139
2026-03-10T09:11:43.982 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 24/139
2026-03-10T09:11:43.989 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 25/139
2026-03-10T09:11:43.992 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 26/139
2026-03-10T09:11:44.022 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 27/139
2026-03-10T09:11:44.028 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 28/139
2026-03-10T09:11:44.038 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 29/139
2026-03-10T09:11:44.051 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 30/139
2026-03-10T09:11:44.059 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 31/139
2026-03-10T09:11:44.087 INFO:teuthology.orchestra.run.vm09.stdout: Installing : zip-3.0-35.el9.x86_64 32/139
2026-03-10T09:11:44.092 INFO:teuthology.orchestra.run.vm09.stdout: Installing : luarocks-3.9.2-5.el9.noarch 33/139
2026-03-10T09:11:44.100 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 34/139
2026-03-10T09:11:44.127 INFO:teuthology.orchestra.run.vm09.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 35/139
2026-03-10T09:11:44.187 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 36/139
2026-03-10T09:11:44.205 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 37/139
2026-03-10T09:11:44.213 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rsa-4.9-2.el9.noarch 38/139
2026-03-10T09:11:44.222 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 39/139
2026-03-10T09:11:44.229 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 40/139
2026-03-10T09:11:44.233 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 41/139
2026-03-10T09:11:44.249 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 42/139
2026-03-10T09:11:44.274 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 43/139
2026-03-10T09:11:44.281 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 44/139
2026-03-10T09:11:44.287 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 45/139
2026-03-10T09:11:44.300 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 46/139
2026-03-10T09:11:44.312 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 47/139
2026-03-10T09:11:44.324 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 48/139
2026-03-10T09:11:44.387 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 49/139
2026-03-10T09:11:44.397 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 50/139
2026-03-10T09:11:44.408 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 51/139
2026-03-10T09:11:44.455 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 52/139
2026-03-10T09:11:44.827 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 53/139
2026-03-10T09:11:44.843 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 54/139
2026-03-10T09:11:44.848 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 55/139
2026-03-10T09:11:44.855 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 56/139
2026-03-10T09:11:44.860 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 57/139
2026-03-10T09:11:44.867 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 58/139
2026-03-10T09:11:44.870 INFO:teuthology.orchestra.run.vm09.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 59/139
2026-03-10T09:11:44.873 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 60/139
2026-03-10T09:11:44.902 INFO:teuthology.orchestra.run.vm09.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 61/139
2026-03-10T09:11:44.951 INFO:teuthology.orchestra.run.vm09.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 62/139
2026-03-10T09:11:44.964 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 63/139
2026-03-10T09:11:44.972 INFO:teuthology.orchestra.run.vm09.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 64/139
2026-03-10T09:11:44.977 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 65/139
2026-03-10T09:11:44.985 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 66/139
2026-03-10T09:11:44.991 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 67/139
2026-03-10T09:11:45.000 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 68/139
2026-03-10T09:11:45.005 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 69/139
2026-03-10T09:11:45.037 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 70/139
2026-03-10T09:11:45.050 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 71/139
2026-03-10T09:11:45.091 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 72/139
2026-03-10T09:11:45.346 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 73/139
2026-03-10T09:11:45.378 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 74/139
2026-03-10T09:11:45.384 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 75/139
2026-03-10T09:11:45.444 INFO:teuthology.orchestra.run.vm09.stdout: Installing : openblas-0.3.29-1.el9.x86_64 76/139
2026-03-10T09:11:45.446 INFO:teuthology.orchestra.run.vm09.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 77/139
2026-03-10T09:11:45.469 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 78/139
2026-03-10T09:11:45.847 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 79/139
2026-03-10T09:11:45.936 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 80/139
2026-03-10T09:11:46.712 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 81/139
2026-03-10T09:11:46.740 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 82/139
2026-03-10T09:11:46.746 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 83/139
2026-03-10T09:11:46.751 INFO:teuthology.orchestra.run.vm09.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 84/139
2026-03-10T09:11:46.903 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 85/139
2026-03-10T09:11:46.906 INFO:teuthology.orchestra.run.vm09.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 86/139
2026-03-10T09:11:46.938 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 86/139
2026-03-10T09:11:46.941 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 87/139
2026-03-10T09:11:46.950 INFO:teuthology.orchestra.run.vm09.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 88/139
2026-03-10T09:11:47.197 INFO:teuthology.orchestra.run.vm09.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 89/139
2026-03-10T09:11:47.200 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 90/139
2026-03-10T09:11:47.219 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 90/139
2026-03-10T09:11:47.221 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 91/139
2026-03-10T09:11:48.330 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 92/139
2026-03-10T09:11:48.336 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 92/139
2026-03-10T09:11:48.357 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 92/139
2026-03-10T09:11:48.369 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 93/139
2026-03-10T09:11:48.379 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-packaging-20.9-5.el9.noarch 94/139
2026-03-10T09:11:48.398 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ply-3.11-14.el9.noarch 95/139
2026-03-10T09:11:48.419 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 96/139
2026-03-10T09:11:48.513 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 97/139
2026-03-10T09:11:48.529 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 98/139
2026-03-10T09:11:48.560 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 99/139
2026-03-10T09:11:48.598 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 100/139
2026-03-10T09:11:48.665 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 101/139
2026-03-10T09:11:48.676 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 102/139
2026-03-10T09:11:48.682 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 103/139
2026-03-10T09:11:48.689 INFO:teuthology.orchestra.run.vm09.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 104/139
2026-03-10T09:11:48.694 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 105/139
2026-03-10T09:11:48.695 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 106/139
2026-03-10T09:11:48.715 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 106/139
2026-03-10T09:11:49.016 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 107/139
2026-03-10T09:11:49.023 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 108/139
2026-03-10T09:11:49.073 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 108/139
2026-03-10T09:11:49.073 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-10T09:11:49.073 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-10T09:11:49.073 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:11:49.079 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 109/139
2026-03-10T09:11:55.553 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 109/139
2026-03-10T09:11:55.553 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /sys
2026-03-10T09:11:55.553 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /proc
2026-03-10T09:11:55.553 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /mnt
2026-03-10T09:11:55.553 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /var/tmp
2026-03-10T09:11:55.553 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /home
2026-03-10T09:11:55.553 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /root
2026-03-10T09:11:55.554 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /tmp
2026-03-10T09:11:55.554 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:11:55.672 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 110/139
2026-03-10T09:11:55.695 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 110/139
2026-03-10T09:11:55.695 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:11:55.695 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T09:11:55.695 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T09:11:55.695 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T09:11:55.695 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:11:55.919 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 111/139
2026-03-10T09:11:55.940 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 111/139
2026-03-10T09:11:55.940 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:11:55.940 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T09:11:55.940 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T09:11:55.940 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T09:11:55.940 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:11:55.949 INFO:teuthology.orchestra.run.vm09.stdout: Installing : mailcap-2.1.49-5.el9.noarch 112/139
2026-03-10T09:11:55.952 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 113/139
2026-03-10T09:11:55.970 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 114/139
2026-03-10T09:11:55.971 INFO:teuthology.orchestra.run.vm09.stdout:Creating group 'qat' with GID 994.
2026-03-10T09:11:55.971 INFO:teuthology.orchestra.run.vm09.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-10T09:11:55.971 INFO:teuthology.orchestra.run.vm09.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-10T09:11:55.971 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:11:55.981 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 114/139
2026-03-10T09:11:56.006 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 114/139
2026-03-10T09:11:56.006 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-10T09:11:56.006 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:11:56.047 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 115/139
2026-03-10T09:11:56.123 INFO:teuthology.orchestra.run.vm09.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 116/139
2026-03-10T09:11:56.128 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 117/139
2026-03-10T09:11:56.141 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 117/139
2026-03-10T09:11:56.141 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:11:56.141 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T09:11:56.141 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:11:56.927 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 118/139
2026-03-10T09:11:56.953 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 118/139
2026-03-10T09:11:56.953 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:11:56.954 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T09:11:56.954 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T09:11:56.954 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T09:11:56.954 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:11:57.015 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 119/139
2026-03-10T09:11:57.018 INFO:teuthology.orchestra.run.vm09.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 119/139
2026-03-10T09:11:57.025 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 120/139
2026-03-10T09:11:57.050 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 121/139
2026-03-10T09:11:57.053 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 122/139
2026-03-10T09:11:57.586 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 122/139
2026-03-10T09:11:57.593 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 123/139
2026-03-10T09:11:58.100 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 123/139
2026-03-10T09:11:58.103 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 124/139
2026-03-10T09:11:58.165 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 124/139
2026-03-10T09:11:58.221 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 125/139
2026-03-10T09:11:58.258 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 126/139
2026-03-10T09:11:58.278 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 126/139
2026-03-10T09:11:58.278 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:11:58.278 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T09:11:58.278 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T09:11:58.278 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T09:11:58.278 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:11:58.293 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 127/139
2026-03-10T09:11:58.302 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 127/139
2026-03-10T09:11:58.792 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 128/139
2026-03-10T09:11:58.795 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 129/139
2026-03-10T09:11:58.815 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 129/139
2026-03-10T09:11:58.815 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:11:58.815 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T09:11:58.815 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T09:11:58.815 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T09:11:58.815 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:11:58.826 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 130/139
2026-03-10T09:11:58.845 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 130/139
2026-03-10T09:11:58.845 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:11:58.845 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T09:11:58.845 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:11:58.996 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 131/139
2026-03-10T09:11:59.016 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 131/139
2026-03-10T09:11:59.016 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:11:59.016 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T09:11:59.016 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T09:11:59.016 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T09:11:59.016 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:12:01.531 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 132/139
2026-03-10T09:12:01.542 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 133/139
2026-03-10T09:12:01.548 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 134/139
2026-03-10T09:12:01.610 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 135/139
2026-03-10T09:12:01.621 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 136/139
2026-03-10T09:12:01.625 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 137/139
2026-03-10T09:12:01.625 INFO:teuthology.orchestra.run.vm09.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 138/139
2026-03-10T09:12:01.644 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 138/139
2026-03-10T09:12:01.644 INFO:teuthology.orchestra.run.vm09.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 139/139
2026-03-10T09:12:03.181 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 139/139
2026-03-10T09:12:03.181 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/139
2026-03-10T09:12:03.182 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 48/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 49/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 50/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : unzip-6.0-59.el9.x86_64 51/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : zip-3.0-35.el9.x86_64 52/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 53/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 54/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 55/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 56/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 57/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 58/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 59/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 60/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 61/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 62/139
2026-03-10T09:12:03.184 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 63/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-5.4.4-4.el9.x86_64 64/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 65/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 66/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 67/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 68/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 69/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 70/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 71/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 72/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 73/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 74/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 75/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 76/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 77/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 79/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 80/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 81/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 82/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 83/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 84/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 85/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 86/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 87/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 88/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 89/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 90/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 91/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 92/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 93/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 94/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 95/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 96/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 97/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 98/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 99/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 100/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 101/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 102/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 103/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 104/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 105/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 106/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 107/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 108/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 109/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 110/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 111/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 112/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 113/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 114/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 115/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 116/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 117/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 118/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 119/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 120/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 121/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 122/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 123/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 124/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 125/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 126/139
2026-03-10T09:12:03.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 127/139
2026-03-10T09:12:03.186 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 128/139
2026-03-10T09:12:03.186 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 129/139
2026-03-10T09:12:03.186 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 130/139
2026-03-10T09:12:03.186 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 131/139
2026-03-10T09:12:03.186 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 132/139
2026-03-10T09:12:03.186 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 133/139
2026-03-10T09:12:03.186 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : re2-1:20211101-20.el9.x86_64 134/139
2026-03-10T09:12:03.186 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 135/139
2026-03-10T09:12:03.186 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 136/139
2026-03-10T09:12:03.186 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 137/139
2026-03-10T09:12:03.186 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 138/139
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 139/139
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout:Upgraded:
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout:Installed:
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.285 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T09:12:03.286 INFO:teuthology.orchestra.run.vm09.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-jmespath-1.0.1-1.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply-3.11-14.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing-2.4.7-9.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:12:03.287 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: zip-3.0-35.el9.x86_64 2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:12:03.288 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-10T09:12:03.376 DEBUG:teuthology.parallel:result is None 2026-03-10T09:12:03.376 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T09:12:03.997 DEBUG:teuthology.orchestra.run.vm09:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-10T09:12:04.017 INFO:teuthology.orchestra.run.vm09.stdout:19.2.3-678.ge911bdeb.el9 2026-03-10T09:12:04.017 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-10T09:12:04.017 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 2026-03-10T09:12:04.018 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-10T09:12:04.018 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T09:12:04.018 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T09:12:04.085 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 2026-03-10T09:12:04.086 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T09:12:04.086 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T09:12:04.150 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T09:12:04.218 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-10T09:12:04.218 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T09:12:04.218 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T09:12:04.281 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T09:12:04.343 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 
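Note: the install task gates on version here: it reads the installed package version straight off the target (rpm -q ceph --qf '%{VERSION}-%{RELEASE}') and compares it against the version Shaman reported for the build (19.2.3-678.ge911bdeb) before any cluster work starts. A minimal sketch of that check, assuming local execution (the helper names are illustrative, not teuthology's actual API):

    import subprocess

    def installed_ceph_version() -> str:
        # Same query the task runs on the target node.
        return subprocess.run(
            ["rpm", "-q", "ceph", "--qf", "%{VERSION}-%{RELEASE}"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()  # e.g. "19.2.3-678.ge911bdeb.el9"

    def check_ceph_version(expected: str) -> None:
        # The installed release carries a dist tag (".el9") that the
        # build version lacks, so a prefix comparison is enough.
        got = installed_ceph_version()
        if not got.startswith(expected):
            raise RuntimeError(f"wrong ceph version: {got!r} != {expected!r}")

    check_ceph_version("19.2.3-678.ge911bdeb")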
2026-03-10T09:12:04.344 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T09:12:04.344 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/stdin-killer
2026-03-10T09:12:04.406 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-10T09:12:04.469 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-10T09:12:04.513 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon election default strategy': 1}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': False}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'CEPHADM_FAILED_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-10T09:12:04.514 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:12:04.514 INFO:tasks.cephadm:Cluster fsid is 349a7c12-1c61-11f1-8c28-6d0db3d11b76
2026-03-10T09:12:04.514 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-10T09:12:04.514 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.109'}
2026-03-10T09:12:04.514 INFO:tasks.cephadm:First mon is mon.a on vm09
2026-03-10T09:12:04.514 INFO:tasks.cephadm:First mgr is a
2026-03-10T09:12:04.514 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-10T09:12:04.514 DEBUG:teuthology.orchestra.run.vm09:> sudo hostname $(hostname -s)
2026-03-10T09:12:04.536 INFO:tasks.cephadm:Downloading "compiled" cephadm from chacra
2026-03-10T09:12:04.537 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:12:05.119 INFO:tasks.cephadm:builder_project result: [{'url': 'https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'chacra_url': 'https://3.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'centos', 'distro_version': '9', 'distro_codename': None, 'modified': '2026-02-25 18:55:15.146628', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['source', 'x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678.ge911bdeb', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.26+soko16', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-10T09:12:05.771 INFO:tasks.util.chacra:got chacra host 3.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=centos%2F9%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:12:05.772 INFO:tasks.cephadm:Discovered chacra url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm
2026-03-10T09:12:05.772 INFO:tasks.cephadm:Downloading cephadm from url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm
2026-03-10T09:12:05.772 DEBUG:teuthology.orchestra.run.vm09:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T09:12:07.182 INFO:teuthology.orchestra.run.vm09.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 09:12 /home/ubuntu/cephtest/cephadm
2026-03-10T09:12:07.182 DEBUG:teuthology.orchestra.run.vm09:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T09:12:07.200 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
2026-03-10T09:12:07.200 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T09:12:07.390 INFO:teuthology.orchestra.run.vm09.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T09:12:38.945 INFO:teuthology.orchestra.run.vm09.stdout:{
2026-03-10T09:12:38.945 INFO:teuthology.orchestra.run.vm09.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T09:12:38.945 INFO:teuthology.orchestra.run.vm09.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T09:12:38.945 INFO:teuthology.orchestra.run.vm09.stdout: "repo_digests": [
2026-03-10T09:12:38.945 INFO:teuthology.orchestra.run.vm09.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T09:12:38.945 INFO:teuthology.orchestra.run.vm09.stdout: ]
2026-03-10T09:12:38.945 INFO:teuthology.orchestra.run.vm09.stdout:}
2026-03-10T09:12:38.964 DEBUG:teuthology.orchestra.run.vm09:> sudo mkdir -p /etc/ceph
2026-03-10T09:12:38.994 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 777 /etc/ceph
2026-03-10T09:12:39.061 INFO:tasks.cephadm:Writing seed config...
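Note: the cephadm binary fetched above is guarded by a deliberately crude sanity check: the file must be non-empty and larger than 1000 bytes, so an HTML error page from the mirror never gets marked executable. A sketch of the same download-and-verify step in plain Python, assuming the chacra URL from the log:

    import os
    import stat
    import urllib.request

    URL = ("https://3.chacra.ceph.com/binaries/ceph/squid/"
           "e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/"
           "x86_64/flavors/default/cephadm")
    DEST = "/home/ubuntu/cephtest/cephadm"

    def fetch_cephadm(url: str = URL, dest: str = DEST) -> None:
        urllib.request.urlretrieve(url, dest)
        # Mirror the shell guard: test -s ... && stat -c%s ... -gt 1000
        if os.path.getsize(dest) <= 1000:
            raise RuntimeError(f"{dest} looks like an error page, not a binary")
        os.chmod(dest, os.stat(dest).st_mode
                 | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)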
2026-03-10T09:12:39.062 INFO:tasks.cephadm: override: [global] mon election default strategy = 1
2026-03-10T09:12:39.062 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-10T09:12:39.062 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-10T09:12:39.062 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = False
2026-03-10T09:12:39.062 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-10T09:12:39.062 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-10T09:12:39.062 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-10T09:12:39.062 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-10T09:12:39.062 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-10T09:12:39.062 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-10T09:12:39.062 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T09:12:39.062 DEBUG:teuthology.orchestra.run.vm09:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-10T09:12:39.118 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = 349a7c12-1c61-11f1-8c28-6d0db3d11b76
mon election default strategy = 1

[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000

[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
mgr/cephadm/use_agent = False

[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20

[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-10T09:12:39.118 DEBUG:teuthology.orchestra.run.vm09:mon.a> sudo journalctl -f -n 0 -u ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mon.a.service
2026-03-10T09:12:39.160 DEBUG:teuthology.orchestra.run.vm09:mgr.a> sudo journalctl -f -n 0 -u ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a.service
2026-03-10T09:12:39.203 INFO:tasks.cephadm:Bootstrapping...
2026-03-10T09:12:39.203 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.109 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-10T09:12:39.345 INFO:teuthology.orchestra.run.vm09.stdout:--------------------------------------------------------------------------------
2026-03-10T09:12:39.346 INFO:teuthology.orchestra.run.vm09.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '349a7c12-1c61-11f1-8c28-6d0db3d11b76', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'a', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.109', '--skip-admin-label']
2026-03-10T09:12:39.346 INFO:teuthology.orchestra.run.vm09.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-10T09:12:39.346 INFO:teuthology.orchestra.run.vm09.stdout:Verifying podman|docker is present...
2026-03-10T09:12:39.366 INFO:teuthology.orchestra.run.vm09.stdout:/bin/podman: stdout 5.8.0
2026-03-10T09:12:39.366 INFO:teuthology.orchestra.run.vm09.stdout:Verifying lvm2 is present...
2026-03-10T09:12:39.367 INFO:teuthology.orchestra.run.vm09.stdout:Verifying time synchronization is in place...
2026-03-10T09:12:39.374 INFO:teuthology.orchestra.run.vm09.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T09:12:39.374 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T09:12:39.381 INFO:teuthology.orchestra.run.vm09.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T09:12:39.381 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stdout inactive
2026-03-10T09:12:39.386 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stdout enabled
2026-03-10T09:12:39.392 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stdout active
2026-03-10T09:12:39.392 INFO:teuthology.orchestra.run.vm09.stdout:Unit chronyd.service is enabled and running
2026-03-10T09:12:39.392 INFO:teuthology.orchestra.run.vm09.stdout:Repeating the final host check...
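Note: the seed config handed to bootstrap is teuthology's stock skeleton with the task's per-section overrides (printed above) merged on top; overrides always win. A sketch of that merge, under the assumption that ceph.conf is close enough to INI for configparser (it is for these options):

    import configparser

    # The per-section overrides printed in the log above.
    overrides = {
        "global": {"mon election default strategy": "1"},
        "mgr": {"debug mgr": "20", "debug ms": "1",
                "mgr/cephadm/use_agent": "False"},
        "mon": {"debug mon": "20", "debug ms": "1", "debug paxos": "20"},
        "osd": {"debug ms": "1", "debug osd": "20",
                "osd mclock iops capacity threshold hdd": "49000"},
    }

    def write_seed_conf(skeleton_path: str, out_path: str) -> None:
        conf = configparser.ConfigParser()
        conf.optionxform = str          # keep option names verbatim
        conf.read(skeleton_path)        # the teuthology defaults
        for section, opts in overrides.items():
            if not conf.has_section(section):
                conf.add_section(section)
            for key, val in opts.items():
                conf.set(section, key, val)  # override wins
        with open(out_path, "w") as f:
            conf.write(f)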
2026-03-10T09:12:39.411 INFO:teuthology.orchestra.run.vm09.stdout:/bin/podman: stdout 5.8.0
2026-03-10T09:12:39.411 INFO:teuthology.orchestra.run.vm09.stdout:podman (/bin/podman) version 5.8.0 is present
2026-03-10T09:12:39.411 INFO:teuthology.orchestra.run.vm09.stdout:systemctl is present
2026-03-10T09:12:39.411 INFO:teuthology.orchestra.run.vm09.stdout:lvcreate is present
2026-03-10T09:12:39.416 INFO:teuthology.orchestra.run.vm09.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T09:12:39.417 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T09:12:39.422 INFO:teuthology.orchestra.run.vm09.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T09:12:39.422 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stdout inactive
2026-03-10T09:12:39.428 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stdout enabled
2026-03-10T09:12:39.433 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stdout active
2026-03-10T09:12:39.433 INFO:teuthology.orchestra.run.vm09.stdout:Unit chronyd.service is enabled and running
2026-03-10T09:12:39.433 INFO:teuthology.orchestra.run.vm09.stdout:Host looks OK
2026-03-10T09:12:39.433 INFO:teuthology.orchestra.run.vm09.stdout:Cluster fsid: 349a7c12-1c61-11f1-8c28-6d0db3d11b76
2026-03-10T09:12:39.433 INFO:teuthology.orchestra.run.vm09.stdout:Acquiring lock 140247581259568 on /run/cephadm/349a7c12-1c61-11f1-8c28-6d0db3d11b76.lock
2026-03-10T09:12:39.433 INFO:teuthology.orchestra.run.vm09.stdout:Lock 140247581259568 acquired on /run/cephadm/349a7c12-1c61-11f1-8c28-6d0db3d11b76.lock
2026-03-10T09:12:39.434 INFO:teuthology.orchestra.run.vm09.stdout:Verifying IP 192.168.123.109 port 3300 ...
2026-03-10T09:12:39.434 INFO:teuthology.orchestra.run.vm09.stdout:Verifying IP 192.168.123.109 port 6789 ...
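Note: the two "Verifying IP ... port" lines are a liveness guard: bootstrap refuses to proceed if something already listens on the mon's v2 (3300) or v1 (6789) port at the chosen IP. A bind test is the usual way to implement such a check; a sketch:

    import socket

    def port_is_free(ip: str, port: int) -> bool:
        # If bind() fails, something is already listening there.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind((ip, port))
            except OSError:
                return False
        return True

    for port in (3300, 6789):  # msgr2 and legacy msgr1
        if not port_is_free("192.168.123.109", port):
            raise RuntimeError(f"mon port {port} is already in use")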
2026-03-10T09:12:39.434 INFO:teuthology.orchestra.run.vm09.stdout:Base mon IP(s) is [192.168.123.109:3300, 192.168.123.109:6789], mon addrv is [v2:192.168.123.109:3300,v1:192.168.123.109:6789]
2026-03-10T09:12:39.437 INFO:teuthology.orchestra.run.vm09.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.109 metric 100
2026-03-10T09:12:39.437 INFO:teuthology.orchestra.run.vm09.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.109 metric 100
2026-03-10T09:12:39.440 INFO:teuthology.orchestra.run.vm09.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-10T09:12:39.440 INFO:teuthology.orchestra.run.vm09.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium
2026-03-10T09:12:39.442 INFO:teuthology.orchestra.run.vm09.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-10T09:12:39.442 INFO:teuthology.orchestra.run.vm09.stdout:/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-10T09:12:39.442 INFO:teuthology.orchestra.run.vm09.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T09:12:39.442 INFO:teuthology.orchestra.run.vm09.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000
2026-03-10T09:12:39.442 INFO:teuthology.orchestra.run.vm09.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:9/64 scope link noprefixroute
2026-03-10T09:12:39.442 INFO:teuthology.orchestra.run.vm09.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T09:12:39.443 INFO:teuthology.orchestra.run.vm09.stdout:Mon IP `192.168.123.109` is in CIDR network `192.168.123.0/24`
2026-03-10T09:12:39.443 INFO:teuthology.orchestra.run.vm09.stdout:Mon IP `192.168.123.109` is in CIDR network `192.168.123.0/24`
2026-03-10T09:12:39.443 INFO:teuthology.orchestra.run.vm09.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24']
2026-03-10T09:12:39.443 INFO:teuthology.orchestra.run.vm09.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-10T09:12:39.444 INFO:teuthology.orchestra.run.vm09.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T09:12:40.700 INFO:teuthology.orchestra.run.vm09.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c
2026-03-10T09:12:40.700 INFO:teuthology.orchestra.run.vm09.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
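Note: the repeated "Mon IP ... is in CIDR network" lines show the inference step: cephadm walks the local routes from `ip route`, keeps every network that contains the mon IP (once per mon port here, hence the duplicate), and the list later collapses to the single public_network value. The containment test itself is one line with Python's ipaddress module:

    import ipaddress

    # Candidate networks parsed from the `ip route` output above.
    local_networks = ["192.168.123.0/24", "fe80::/64"]

    def infer_public_network(mon_ip: str) -> str:
        addr = ipaddress.ip_address(mon_ip)
        for cidr in local_networks:
            net = ipaddress.ip_network(cidr)
            if addr.version == net.version and addr in net:
                return cidr
        raise RuntimeError(f"{mon_ip} is not on any local network")

    assert infer_public_network("192.168.123.109") == "192.168.123.0/24"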
2026-03-10T09:12:40.700 INFO:teuthology.orchestra.run.vm09.stdout:/bin/podman: stderr Getting image source signatures
2026-03-10T09:12:40.700 INFO:teuthology.orchestra.run.vm09.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7
2026-03-10T09:12:40.700 INFO:teuthology.orchestra.run.vm09.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92
2026-03-10T09:12:40.700 INFO:teuthology.orchestra.run.vm09.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c
2026-03-10T09:12:40.700 INFO:teuthology.orchestra.run.vm09.stdout:/bin/podman: stderr Writing manifest to image destination
2026-03-10T09:12:40.847 INFO:teuthology.orchestra.run.vm09.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T09:12:40.847 INFO:teuthology.orchestra.run.vm09.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T09:12:40.847 INFO:teuthology.orchestra.run.vm09.stdout:Extracting ceph user uid/gid from container image...
2026-03-10T09:12:40.977 INFO:teuthology.orchestra.run.vm09.stdout:stat: stdout 167 167
2026-03-10T09:12:40.977 INFO:teuthology.orchestra.run.vm09.stdout:Creating initial keys...
2026-03-10T09:12:41.088 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph-authtool: stdout AQCJ4K9p/YEKAxAA0nw5BExAIPsB2xmZ62xxnw==
2026-03-10T09:12:41.198 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph-authtool: stdout AQCJ4K9pobCnCRAADlmW02By2XcTmq+dtPVPug==
2026-03-10T09:12:41.282 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph-authtool: stdout AQCJ4K9pjfgJEBAAZhU3ZnOMvm7XNgd7ebJKmg==
2026-03-10T09:12:41.282 INFO:teuthology.orchestra.run.vm09.stdout:Creating initial monmap...
2026-03-10T09:12:41.378 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T09:12:41.378 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-10T09:12:41.378 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 349a7c12-1c61-11f1-8c28-6d0db3d11b76
2026-03-10T09:12:41.378 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T09:12:41.379 INFO:teuthology.orchestra.run.vm09.stdout:monmaptool for a [v2:192.168.123.109:3300,v1:192.168.123.109:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T09:12:41.379 INFO:teuthology.orchestra.run.vm09.stdout:setting min_mon_release = quincy
2026-03-10T09:12:41.379 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/monmaptool: set fsid to 349a7c12-1c61-11f1-8c28-6d0db3d11b76
2026-03-10T09:12:41.379 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T09:12:41.379 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:12:41.379 INFO:teuthology.orchestra.run.vm09.stdout:Creating mon...
2026-03-10T09:12:41.519 INFO:teuthology.orchestra.run.vm09.stdout:create mon.a on
2026-03-10T09:12:41.663 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target".
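Note: the three bare base64 strings from ceph-authtool are freshly generated secrets; bootstrap creates a handful of initial keys (mon, admin and mgr among them — which printed line corresponds to which key is not identifiable from the log, so that mapping is an assumption here) and assembles the keyrings itself. `ceph-authtool --gen-print-key` emits exactly such a key; a sketch:

    import subprocess

    def gen_key() -> str:
        # Prints just the base64 secret on stdout, like the three
        # AQ... strings in the log.
        return subprocess.run(
            ["ceph-authtool", "--gen-print-key"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    mon_key, admin_key, mgr_key = gen_key(), gen_key(), gen_key()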
2026-03-10T09:12:41.781 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-10T09:12:41.906 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76.target → /etc/systemd/system/ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76.target.
2026-03-10T09:12:41.906 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76.target → /etc/systemd/system/ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76.target.
2026-03-10T09:12:42.060 INFO:teuthology.orchestra.run.vm09.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mon.a
2026-03-10T09:12:42.060 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stderr Failed to reset failed state of unit ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mon.a.service: Unit ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mon.a.service not loaded.
2026-03-10T09:12:42.201 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76.target.wants/ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mon.a.service → /etc/systemd/system/ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@.service.
2026-03-10T09:12:42.385 INFO:teuthology.orchestra.run.vm09.stdout:firewalld does not appear to be present
2026-03-10T09:12:42.385 INFO:teuthology.orchestra.run.vm09.stdout:Not possible to enable service . firewalld.service is not available
2026-03-10T09:12:42.385 INFO:teuthology.orchestra.run.vm09.stdout:Waiting for mon to start...
2026-03-10T09:12:42.385 INFO:teuthology.orchestra.run.vm09.stdout:Waiting for mon...
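Note: daemon lifecycle is plain systemd: each daemon is an instance of the per-cluster template unit ceph-<fsid>@.service, grouped under ceph-<fsid>.target (the symlinks above). The failed reset-failed appears to be benign on first deploy, since the unit has never been loaded yet. A sketch of the same idempotent enable sequence:

    import subprocess

    FSID = "349a7c12-1c61-11f1-8c28-6d0db3d11b76"

    def enable_daemon(daemon: str) -> None:
        unit = f"ceph-{FSID}@{daemon}.service"
        # Tolerated failure: matches the non-zero reset-failed above.
        subprocess.run(["systemctl", "reset-failed", unit], check=False)
        subprocess.run(["systemctl", "enable", unit], check=True)

    enable_daemon("mon.a")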
2026-03-10T09:12:42.591 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout cluster:
2026-03-10T09:12:42.591 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout id: 349a7c12-1c61-11f1-8c28-6d0db3d11b76
2026-03-10T09:12:42.591 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout health: HEALTH_OK
2026-03-10T09:12:42.591 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout
2026-03-10T09:12:42.591 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout services:
2026-03-10T09:12:42.591 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.147428s)
2026-03-10T09:12:42.591 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout mgr: no daemons active
2026-03-10T09:12:42.591 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in
2026-03-10T09:12:42.591 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout
2026-03-10T09:12:42.591 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout data:
2026-03-10T09:12:42.591 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs
2026-03-10T09:12:42.591 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B
2026-03-10T09:12:42.591 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail
2026-03-10T09:12:42.592 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout pgs:
2026-03-10T09:12:42.592 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout
2026-03-10T09:12:42.592 INFO:teuthology.orchestra.run.vm09.stdout:mon is available
2026-03-10T09:12:42.592 INFO:teuthology.orchestra.run.vm09.stdout:Assimilating anything we can from ceph.conf...
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout fsid = 349a7c12-1c61-11f1-8c28-6d0db3d11b76
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.109:3300,v1:192.168.123.109:6789]
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout mgr/cephadm/use_agent = False
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
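Note: "Waiting for mon..." is a simple poll: run `ceph status` until the new mon answers with itself in quorum, after which whatever the seed config can centralize is handed to the mon config database (the "Assimilating..." step; the [global]/[mgr]/[osd] block printed above is what assimilate-conf reports back). A polling sketch:

    import json
    import subprocess
    import time

    def wait_for_mon(timeout: float = 60.0) -> None:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            r = subprocess.run(
                ["ceph", "status", "--format", "json"],
                capture_output=True, text=True,
            )
            if r.returncode == 0 and json.loads(r.stdout).get("quorum"):
                return  # mon is available
            time.sleep(1)
        raise TimeoutError("mon did not come up in time")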
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T09:12:42.789 INFO:teuthology.orchestra.run.vm09.stdout:Generating new minimal ceph.conf...
2026-03-10T09:12:42.980 INFO:teuthology.orchestra.run.vm09.stdout:Restarting the monitor...
2026-03-10T09:12:43.128 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a[49357]: 2026-03-10T09:12:43.067+0000 7fe51d459640 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2026-03-10T09:12:43.348 INFO:teuthology.orchestra.run.vm09.stdout:Setting public_network to 192.168.123.0/24 in mon config section
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 podman[49558]: 2026-03-10 09:12:43.131408797 +0000 UTC m=+0.078552111 container died 387198798e83f7e67c6bcc04101b76829de071781451c6fe8344ec7aa9f115fb (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 podman[49558]: 2026-03-10 09:12:43.146359457 +0000 UTC m=+0.093502781 container remove 387198798e83f7e67c6bcc04101b76829de071781451c6fe8344ec7aa9f115fb (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 bash[49558]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mon.a.service: Deactivated successfully.
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 systemd[1]: Stopped Ceph mon.a for 349a7c12-1c61-11f1-8c28-6d0db3d11b76.
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 systemd[1]: Starting Ceph mon.a for 349a7c12-1c61-11f1-8c28-6d0db3d11b76...
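Note: the interleaved journalctl@ceph.mon.a lines come from teuthology itself: for each daemon it keeps `sudo journalctl -f -n 0 -u ceph-<fsid>@<daemon>.service` running (started back at the "Bootstrapping..." step) and folds the stream into the job log, which is why the mon's SIGTERM and the podman container teardown show up here mid-restart. A sketch of that follower:

    import subprocess

    FSID = "349a7c12-1c61-11f1-8c28-6d0db3d11b76"

    def follow_daemon_log(daemon: str) -> subprocess.Popen:
        # Same invocation as the log: follow new entries only (-n 0).
        unit = f"ceph-{FSID}@{daemon}.service"
        return subprocess.Popen(
            ["sudo", "journalctl", "-f", "-n", "0", "-u", unit],
            stdout=subprocess.PIPE, text=True,
        )

    mon_journal = follow_daemon_log("mon.a")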
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 podman[49629]: 2026-03-10 09:12:43.304172562 +0000 UTC m=+0.015249320 container create 098843f55167c7e172389a65638e216bab6e90de7771a2eba638f118cbc10698 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 podman[49629]: 2026-03-10 09:12:43.337653249 +0000 UTC m=+0.048730018 container init 098843f55167c7e172389a65638e216bab6e90de7771a2eba638f118cbc10698 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 podman[49629]: 2026-03-10 09:12:43.340977888 +0000 UTC m=+0.052054657 container start 098843f55167c7e172389a65638e216bab6e90de7771a2eba638f118cbc10698 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team )
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 bash[49629]: 098843f55167c7e172389a65638e216bab6e90de7771a2eba638f118cbc10698
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 podman[49629]: 2026-03-10 09:12:43.297792465 +0000 UTC m=+0.008869224 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 systemd[1]: Started Ceph mon.a for 349a7c12-1c61-11f1-8c28-6d0db3d11b76.
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: set uid:gid to 167:167 (ceph:ceph)
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: pidfile_write: ignore empty --pid-file
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: load: jerasure load: lrc
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: RocksDB version: 7.9.2
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Git sha 0
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Compile date 2026-02-25 18:11:04
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: DB SUMMARY
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: DB Session ID: LUIAOKK85VMKP4BF5DIO
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: CURRENT file: CURRENT
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: IDENTITY file: IDENTITY
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 75535 ;
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.error_if_exists: 0
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.create_if_missing: 0
2026-03-10T09:12:43.384 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.paranoid_checks: 1
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.env: 0x560faca83dc0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.fs: PosixFileSystem
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.info_log: 0x560fad4b6700
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_file_opening_threads: 16
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.statistics: (nil)
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.use_fsync: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_log_file_size: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.keep_log_file_num: 1000
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.recycle_log_file_num: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.allow_fallocate: 1
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.allow_mmap_reads: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.allow_mmap_writes: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.use_direct_reads: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.create_missing_column_families: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.db_log_dir:
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.wal_dir:
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.advise_random_on_open: 1
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.db_write_buffer_size: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.write_buffer_manager: 0x560fad4bb900
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.rate_limiter: (nil)
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.wal_recovery_mode: 2
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.enable_thread_tracking: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.enable_pipelined_write: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.unordered_write: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.row_cache: None
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.wal_filter: None
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.allow_ingest_behind: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.two_write_queues: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.manual_wal_flush: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.wal_compression: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.atomic_flush: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T09:12:43.385 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.log_readahead_size: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.best_efforts_recovery: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.allow_data_in_errors: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.db_host_id: __hostname__
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.enforce_single_del_contracts: true
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_background_jobs: 2
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_background_compactions: -1
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_subcompactions: 1
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_total_wal_size: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_open_files: -1
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.bytes_per_sync: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compaction_readahead_size: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_background_flushes: -1
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Compression algorithms supported:
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: kZSTD supported: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: kXpressCompression supported: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: kBZip2Compression supported: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: kLZ4Compression supported: 1
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: kZlibCompression supported: 1
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: kLZ4HCCompression supported: 1
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: kSnappyCompression supported: 1
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: DMutex implementation: pthread_mutex_t
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.merge_operator:
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compaction_filter: None
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compaction_filter_factory: None
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.sst_partitioner_factory: None
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fad4b6640)
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: cache_index_and_filter_blocks: 1
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: pin_top_level_index_and_filter: 1
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: index_type: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: data_block_index_type: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: index_shortening: 1
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: data_block_hash_table_util_ratio: 0.750000
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: checksum: 4
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: no_block_cache: 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: block_cache: 0x560fad4db350
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: block_cache_name: BinnedLRUCache
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: block_cache_options:
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: capacity : 536870912
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: num_shard_bits : 4
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: strict_capacity_limit : 0
2026-03-10T09:12:43.386 INFO:journalctl@ceph.mon.a.vm09.stdout: high_pri_pool_ratio: 0.000
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: block_cache_compressed: (nil)
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: persistent_cache: (nil)
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: block_size: 4096
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: block_size_deviation: 10
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: block_restart_interval: 16
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: index_block_restart_interval: 1
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: metadata_block_size: 4096
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: partition_filters: 0
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: use_delta_encoding: 1
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: filter_policy: bloomfilter
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: whole_key_filtering: 1
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: verify_compression: 0
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: read_amp_bytes_per_bit: 0
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: format_version: 5
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: enable_index_compression: 1
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: block_align: 0
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: max_auto_readahead_size: 262144
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: prepopulate_block_cache: 0
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: initial_auto_readahead_size: 8192
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout: num_file_reads_for_auto_readahead: 2
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.write_buffer_size: 33554432
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_write_buffer_number: 2
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compression: NoCompression
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.bottommost_compression: Disabled
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.prefix_extractor: nullptr
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.num_levels: 7
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compression_opts.level: 32767
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compression_opts.strategy: 0
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T09:12:43.387
INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compression_opts.enabled: false 2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.target_file_size_base: 67108864 2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T09:12:43.387 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.arena_block_size: 1048576 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: 
Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.disable_auto_compactions: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.inplace_update_support: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.bloom_locality: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.max_successive_merges: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.paranoid_file_checks: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 
ceph-mon[49644]: rocksdb: Options.force_consistency_checks: 1 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.ttl: 2592000 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.enable_blob_files: false 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.min_blob_size: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.blob_file_size: 268435456 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.blob_file_starting_level: 0 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7eb0c5ee-6cc1-49e4-9b8d-70ca3c146bfd 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773133963367454, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: 
rocksdb: EVENT_LOG_v1 {"time_micros": 1773133963368905, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 225, "table_properties": {"data_size": 70895, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9705, "raw_average_key_size": 49, "raw_value_size": 65374, "raw_average_value_size": 333, "num_data_blocks": 8, "num_entries": 196, "num_filter_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773133963, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7eb0c5ee-6cc1-49e4-9b8d-70ca3c146bfd", "db_session_id": "LUIAOKK85VMKP4BF5DIO", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773133963368964, "job": 1, "event": "recovery_finished"} 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560fad4dce00 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: DB pointer 0x560fad5f2000 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T09:12:43.388 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: ** DB Stats ** 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 
2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: ** Compaction Stats [default] ** 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: L0 2/0 72.77 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 55.4 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Sum 2/0 72.77 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 55.4 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 55.4 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: ** Compaction Stats [default] ** 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 55.4 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Cumulative compaction: 0.00 GB write, 7.69 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Interval compaction: 0.00 GB write, 7.69 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Stalls(count): 0 level0_slowdown, 0 
level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Block cache BinnedLRUCache@0x560fad4db350#2 capacity: 512.00 MB usage: 26.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: Block cache entry stats(count,size,portion): DataBlock(3,25.11 KB,0.00478923%) FilterBlock(2,0.70 KB,0.00013411%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%) 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: starting mon.a rank 0 at public addrs [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] at bind addrs [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: mon.a@-1(???) e1 preinit fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: mon.a@-1(???).mds e1 new map 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: mon.a@-1(???).mds e1 print_map 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: e1 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: btime 2026-03-10T09:12:42:416288+0000 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: legacy client fscid: -1 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout: No filesystems configured 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T09:12:43.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-10T09:12:43.552 INFO:teuthology.orchestra.run.vm09.stdout:Wrote 
config to /etc/ceph/ceph.conf 2026-03-10T09:12:43.553 INFO:teuthology.orchestra.run.vm09.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T09:12:43.553 INFO:teuthology.orchestra.run.vm09.stdout:Creating mgr... 2026-03-10T09:12:43.554 INFO:teuthology.orchestra.run.vm09.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T09:12:43.554 INFO:teuthology.orchestra.run.vm09.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-10T09:12:43.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T09:12:43.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: monmap epoch 1 2026-03-10T09:12:43.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 2026-03-10T09:12:43.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: last_changed 2026-03-10T09:12:41.364721+0000 2026-03-10T09:12:43.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: created 2026-03-10T09:12:41.364721+0000 2026-03-10T09:12:43.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: min_mon_release 19 (squid) 2026-03-10T09:12:43.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: election_strategy: 1 2026-03-10T09:12:43.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: 0: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.a 2026-03-10T09:12:43.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: fsmap 2026-03-10T09:12:43.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T09:12:43.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:43 vm09 ceph-mon[49644]: mgrmap e1: no daemons active 2026-03-10T09:12:43.699 INFO:teuthology.orchestra.run.vm09.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a 2026-03-10T09:12:43.699 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stderr Failed to reset failed state of unit ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a.service: Unit ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a.service not loaded. 2026-03-10T09:12:43.819 INFO:teuthology.orchestra.run.vm09.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76.target.wants/ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a.service → /etc/systemd/system/ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@.service. 2026-03-10T09:12:43.980 INFO:teuthology.orchestra.run.vm09.stdout:firewalld does not appear to be present 2026-03-10T09:12:43.980 INFO:teuthology.orchestra.run.vm09.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T09:12:43.980 INFO:teuthology.orchestra.run.vm09.stdout:firewalld does not appear to be present 2026-03-10T09:12:43.980 INFO:teuthology.orchestra.run.vm09.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-10T09:12:43.980 INFO:teuthology.orchestra.run.vm09.stdout:Waiting for mgr to start... 2026-03-10T09:12:43.980 INFO:teuthology.orchestra.run.vm09.stdout:Waiting for mgr... 
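The "mgr not available, waiting (n/15)..." lines that follow are the bootstrap's status-polling loop: it repeatedly dispatches `ceph status --format json-pretty` (visible above and below as mon audit entries with cmd=[{"prefix": "status", "format": "json-pretty"}]) and proceeds once the reported mgrmap shows an available active mgr. A minimal sketch of such a poll, assuming only a working `ceph` CLI on PATH; the helper name and the fixed 2-second delay are illustrative and this is not cephadm's actual implementation (the log shows a 15-attempt budget):

import json
import subprocess
import time

def wait_for_mgr(retries: int = 15, delay: float = 2.0) -> bool:
    # Hypothetical helper mirroring the wait loop visible in this log;
    # not cephadm's own code. Polls `ceph status` and checks the mgrmap.
    for attempt in range(1, retries + 1):
        out = subprocess.run(
            ["ceph", "status", "--format", "json-pretty"],
            capture_output=True, text=True, check=True,
        ).stdout
        if json.loads(out).get("mgrmap", {}).get("available", False):
            return True  # corresponds to "mgr is available" in the log
        print(f"mgr not available, waiting ({attempt}/{retries})...")
        time.sleep(delay)
    return False

The status document being polled is the same JSON printed below: until mgr.a finishes loading its python modules, the dump reports "mgrmap": {"available": false}, and the loop keeps waiting; the third dump flips to "available": true.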
2026-03-10T09:12:44.222 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout 2026-03-10T09:12:44.222 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout { 2026-03-10T09:12:44.222 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "fsid": "349a7c12-1c61-11f1-8c28-6d0db3d11b76", 2026-03-10T09:12:44.222 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T09:12:44.222 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T09:12:44.222 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T09:12:44.222 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T09:12:44.222 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:44.222 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T09:12:44.222 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout 0 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout 
"num_objects": 0, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T09:12:42:416288+0000", 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T09:12:44.223 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T09:12:42.416929+0000", 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout } 2026-03-10T09:12:44.224 INFO:teuthology.orchestra.run.vm09.stdout:mgr not available, waiting (1/15)... 2026-03-10T09:12:44.864 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:44 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2437562932' entity='client.admin' 2026-03-10T09:12:44.865 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:44 vm09 ceph-mon[49644]: from='client.? 
192.168.123.109:0/4284329159' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T09:12:44.865 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:44 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:44.542+0000 7feceb48c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T09:12:45.139 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:44 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:44.867+0000 7feceb48c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T09:12:45.139 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:44 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T09:12:45.139 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:44 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T09:12:45.139 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:44 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: from numpy import show_config as show_numpy_config 2026-03-10T09:12:45.139 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:44 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:44.953+0000 7feceb48c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T09:12:45.139 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:44 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:44.991+0000 7feceb48c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T09:12:45.139 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:45 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:45.063+0000 7feceb48c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T09:12:45.841 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:45 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:45.572+0000 7feceb48c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T09:12:45.841 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:45 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:45.683+0000 7feceb48c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T09:12:45.841 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:45 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:45.723+0000 7feceb48c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T09:12:45.841 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:45 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:45.759+0000 7feceb48c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T09:12:45.841 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:45 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:45.802+0000 7feceb48c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T09:12:46.139 
INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:45 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:45.844+0000 7feceb48c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T09:12:46.139 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:46 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:46.028+0000 7feceb48c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T09:12:46.139 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:46 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:46.080+0000 7feceb48c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout { 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "fsid": "349a7c12-1c61-11f1-8c28-6d0db3d11b76", 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout 0 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T09:12:46.463 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 
2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T09:12:42:416288+0000", 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:46.464 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T09:12:42.416929+0000", 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T09:12:46.465 
INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout } 2026-03-10T09:12:46.465 INFO:teuthology.orchestra.run.vm09.stdout:mgr not available, waiting (2/15)... 2026-03-10T09:12:46.612 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:46 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2019552952' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T09:12:46.612 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:46 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:46.308+0000 7feceb48c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T09:12:46.889 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:46 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:46.615+0000 7feceb48c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T09:12:46.889 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:46 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:46.656+0000 7feceb48c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T09:12:46.889 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:46 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:46.699+0000 7feceb48c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T09:12:46.889 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:46 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:46.778+0000 7feceb48c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T09:12:46.889 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:46 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:46.815+0000 7feceb48c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T09:12:47.156 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:46 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:46.894+0000 7feceb48c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T09:12:47.156 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:47.016+0000 7feceb48c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T09:12:47.156 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:47.159+0000 7feceb48c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T09:12:47.459 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:47.196+0000 7feceb48c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T09:12:47.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-mon[49644]: Activating manager daemon a 2026-03-10T09:12:47.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-mon[49644]: mgrmap e2: a(active, starting, since 0.00455265s) 2026-03-10T09:12:47.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-mon[49644]: from='mgr.14100 192.168.123.109:0/1394751038' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T09:12:47.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-mon[49644]: from='mgr.14100 192.168.123.109:0/1394751038' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T09:12:47.889 
INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-mon[49644]: from='mgr.14100 192.168.123.109:0/1394751038' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T09:12:47.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-mon[49644]: from='mgr.14100 192.168.123.109:0/1394751038' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:12:47.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-mon[49644]: from='mgr.14100 192.168.123.109:0/1394751038' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T09:12:47.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-mon[49644]: Manager daemon a is now available 2026-03-10T09:12:47.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-mon[49644]: from='mgr.14100 192.168.123.109:0/1394751038' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T09:12:47.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-mon[49644]: from='mgr.14100 192.168.123.109:0/1394751038' entity='mgr.a' 2026-03-10T09:12:47.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-mon[49644]: from='mgr.14100 192.168.123.109:0/1394751038' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T09:12:47.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-mon[49644]: from='mgr.14100 192.168.123.109:0/1394751038' entity='mgr.a' 2026-03-10T09:12:47.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:47 vm09 ceph-mon[49644]: from='mgr.14100 192.168.123.109:0/1394751038' entity='mgr.a' 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout { 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "fsid": "349a7c12-1c61-11f1-8c28-6d0db3d11b76", 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout 0 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: 
stdout "epoch": 1, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T09:12:42:416288+0000", 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T09:12:48.771 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T09:12:48.772 
INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T09:12:42.416929+0000", 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout } 2026-03-10T09:12:48.772 INFO:teuthology.orchestra.run.vm09.stdout:mgr is available 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout fsid = 349a7c12-1c61-11f1-8c28-6d0db3d11b76 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.109:3300,v1:192.168.123.109:6789] 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T09:12:49.045 INFO:teuthology.orchestra.run.vm09.stdout:Enabling cephadm module... 2026-03-10T09:12:49.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:49 vm09 ceph-mon[49644]: mgrmap e3: a(active, since 1.00892s) 2026-03-10T09:12:49.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:49 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/4251181520' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T09:12:49.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:49 vm09 ceph-mon[49644]: from='client.? 
192.168.123.109:0/1213335856' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T09:12:49.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:49 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/1213335856' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T09:12:50.359 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:50 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/3335889371' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T09:12:50.359 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:50 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/3335889371' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T09:12:50.359 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:50 vm09 ceph-mon[49644]: mgrmap e4: a(active, since 2s) 2026-03-10T09:12:50.359 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: ignoring --setuser ceph since I am not root 2026-03-10T09:12:50.359 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: ignoring --setgroup ceph since I am not root 2026-03-10T09:12:50.359 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:50.216+0000 7f3c16d72140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T09:12:50.359 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:50.265+0000 7f3c16d72140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T09:12:50.392 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout { 2026-03-10T09:12:50.393 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "epoch": 4, 2026-03-10T09:12:50.393 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T09:12:50.393 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "active_name": "a", 2026-03-10T09:12:50.393 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T09:12:50.393 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout } 2026-03-10T09:12:50.393 INFO:teuthology.orchestra.run.vm09.stdout:Waiting for the mgr to restart... 2026-03-10T09:12:50.393 INFO:teuthology.orchestra.run.vm09.stdout:Waiting for mgr epoch 4... 2026-03-10T09:12:51.079 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:50.738+0000 7f3c16d72140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T09:12:51.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:51 vm09 ceph-mon[49644]: from='client.? 
192.168.123.109:0/1281022317' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T09:12:51.389 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:51.081+0000 7f3c16d72140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T09:12:51.389 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T09:12:51.389 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T09:12:51.389 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: from numpy import show_config as show_numpy_config 2026-03-10T09:12:51.389 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:51.172+0000 7f3c16d72140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T09:12:51.389 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:51.209+0000 7f3c16d72140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T09:12:51.389 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:51.282+0000 7f3c16d72140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T09:12:52.074 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:51.795+0000 7f3c16d72140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T09:12:52.074 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:51.907+0000 7f3c16d72140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T09:12:52.074 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:51.951+0000 7f3c16d72140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T09:12:52.074 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:51.990+0000 7f3c16d72140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T09:12:52.074 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:52.036+0000 7f3c16d72140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T09:12:52.389 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:52.077+0000 7f3c16d72140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T09:12:52.389 
INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:52.253+0000 7f3c16d72140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T09:12:52.389 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:52.306+0000 7f3c16d72140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T09:12:52.828 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:52.543+0000 7f3c16d72140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T09:12:52.828 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:52.830+0000 7f3c16d72140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T09:12:53.103 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:52.868+0000 7f3c16d72140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T09:12:53.103 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:52.910+0000 7f3c16d72140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T09:12:53.103 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:52.988+0000 7f3c16d72140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T09:12:53.103 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:53 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:53.026+0000 7f3c16d72140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T09:12:53.380 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:53 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:53.106+0000 7f3c16d72140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T09:12:53.380 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:53 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:53.227+0000 7f3c16d72140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T09:12:53.639 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:53 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:53.372+0000 7f3c16d72140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T09:12:53.639 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:53 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:12:53.413+0000 7f3c16d72140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T09:12:54.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:53 vm09 ceph-mon[49644]: Active manager daemon a restarted 2026-03-10T09:12:54.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:53 vm09 ceph-mon[49644]: Activating manager daemon a 2026-03-10T09:12:54.864 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout { 2026-03-10T09:12:54.865 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 6, 2026-03-10T09:12:54.865 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T09:12:54.865 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout } 
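The restart check above polls until the restarted mgr reports a map epoch at or past the target and "initialized": true (the mon log shows the underlying mgr_status dispatches). An equivalent standalone check from a shell would look roughly like this; a sketch only, assuming the admin keyring is in place and jq is installed:

    # Poll the mgr map until it reaches the target epoch (4 in this run)
    target=4
    until [ "$(ceph mgr stat | jq .epoch)" -ge "$target" ]; do
        sleep 2
    done
    echo "mgr epoch $target is available"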
2026-03-10T09:12:54.865 INFO:teuthology.orchestra.run.vm09.stdout:mgr epoch 4 is available
2026-03-10T09:12:54.865 INFO:teuthology.orchestra.run.vm09.stdout:Setting orchestrator backend to cephadm...
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: osdmap e2: 0 total, 0 up, 0 in
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: mgrmap e5: a(active, starting, since 0.417004s)
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: Manager daemon a is now available
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: Found migration_current of "None". Setting to last migration.
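The "Setting orchestrator backend to cephadm..." step corresponds to the standard orchestrator CLI sequence (the mon log shows the "orch set backend" dispatch a few entries below). A minimal sketch of the same sequence run by hand:

    # Point the orchestrator CLI at the cephadm mgr module
    ceph mgr module enable cephadm
    ceph orch set backend cephadm
    ceph orch status   # should report Backend: cephadm, Available: Yes

The "value unchanged" stdout below is the mon's response when the backend setting is already in effect.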
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a'
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a'
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a'
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a'
2026-03-10T09:12:55.130 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:54 vm09 ceph-mon[49644]: mgrmap e6: a(active, since 1.39833s)
2026-03-10T09:12:55.429 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout value unchanged
2026-03-10T09:12:55.429 INFO:teuthology.orchestra.run.vm09.stdout:Generating ssh key...
2026-03-10T09:12:55.907 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: Generating public/private ed25519 key pair.
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: Your identification has been saved in /tmp/tmpbfjrtc1v/key
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: Your public key has been saved in /tmp/tmpbfjrtc1v/key.pub
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: The key fingerprint is:
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: SHA256:wMFaET+jxWpq+DKFOvzbuwmUngM+TNYDkTbJdfnoovc ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: The key's randomart image is:
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: +--[ED25519 256]--+
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: |..+. o=o |
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: | *. .oo+ |
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: |... o= * |
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: | o o. * o |
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: | + *. + S |
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: |= =o++ |
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: |.=o*+ |
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: |oo+++ . |
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: | .o=+Eo |
2026-03-10T09:12:55.908 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:12:55 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: +----[SHA256]-----+
2026-03-10T09:12:55.932 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwIGQEWlwjLwslhAzpClJkEeBfqGIiqgdF5KJVXenaE ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76
2026-03-10T09:12:55.932 INFO:teuthology.orchestra.run.vm09.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub
2026-03-10T09:12:55.932 INFO:teuthology.orchestra.run.vm09.stdout:Adding key to root@localhost authorized_keys...
2026-03-10T09:12:55.933 INFO:teuthology.orchestra.run.vm09.stdout:Adding host vm09...
2026-03-10T09:12:56.165 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:56 vm09 ceph-mon[49644]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T09:12:56.165 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:56 vm09 ceph-mon[49644]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T09:12:56.165 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:56 vm09 ceph-mon[49644]: [10/Mar/2026:09:12:54] ENGINE Bus STARTING
2026-03-10T09:12:56.165 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:56 vm09 ceph-mon[49644]: [10/Mar/2026:09:12:55] ENGINE Serving on https://192.168.123.109:7150
2026-03-10T09:12:56.165 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:56 vm09 ceph-mon[49644]: [10/Mar/2026:09:12:55] ENGINE Client ('192.168.123.109', 39540) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T09:12:56.165 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:56 vm09 ceph-mon[49644]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:12:56.165 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:56 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a'
2026-03-10T09:12:56.165 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:56 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:12:56.165 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:56 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:12:56.165 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:56 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a'
2026-03-10T09:12:56.166 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:56 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a'
2026-03-10T09:12:57.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:57 vm09 ceph-mon[49644]: [10/Mar/2026:09:12:55] ENGINE Serving on http://192.168.123.109:8765
2026-03-10T09:12:57.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:57 vm09 ceph-mon[49644]: [10/Mar/2026:09:12:55] ENGINE Bus STARTED
2026-03-10T09:12:57.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:57 vm09 ceph-mon[49644]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:12:57.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:57 vm09 ceph-mon[49644]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:12:57.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:57 vm09 ceph-mon[49644]: Generating ssh key...
2026-03-10T09:12:57.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:57 vm09 ceph-mon[49644]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:12:57.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:57 vm09 ceph-mon[49644]: mgrmap e7: a(active, since 2s)
2026-03-10T09:12:57.847 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout Added host 'vm09' with addr '192.168.123.109'
2026-03-10T09:12:57.847 INFO:teuthology.orchestra.run.vm09.stdout:Deploying unmanaged mon service...
2026-03-10T09:12:58.839 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:58 vm09 ceph-mon[49644]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "addr": "192.168.123.109", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:12:58.839 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:58 vm09 ceph-mon[49644]: Deploying cephadm binary to vm09
2026-03-10T09:12:58.839 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:58 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a'
2026-03-10T09:12:58.839 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:58 vm09 ceph-mon[49644]: Added host vm09
2026-03-10T09:12:58.839 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:58 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:12:58.901 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout Scheduled mon update...
2026-03-10T09:12:58.901 INFO:teuthology.orchestra.run.vm09.stdout:Deploying unmanaged mgr service...
2026-03-10T09:12:59.227 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout Scheduled mgr update...
2026-03-10T09:12:59.845 INFO:teuthology.orchestra.run.vm09.stdout:Enabling the dashboard module...
2026-03-10T09:13:00.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:59 vm09 ceph-mon[49644]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:13:00.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:59 vm09 ceph-mon[49644]: Saving service mon spec with placement count:5
2026-03-10T09:13:00.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:59 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a'
2026-03-10T09:13:00.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:59 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a'
2026-03-10T09:13:00.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:59 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a'
2026-03-10T09:13:00.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:59 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2866483384' entity='client.admin'
2026-03-10T09:13:00.140 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:59 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a'
2026-03-10T09:13:00.140 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:12:59 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/1794283726' entity='client.admin'
2026-03-10T09:13:01.057 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:01 vm09 ceph-mon[49644]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:13:01.057 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:01 vm09 ceph-mon[49644]: Saving service mgr spec with placement count:2
2026-03-10T09:13:01.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:01 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a'
2026-03-10T09:13:01.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:01 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/4192856989' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-10T09:13:01.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:01 vm09 ceph-mon[49644]: from='mgr.14118 192.168.123.109:0/1519162063' entity='mgr.a'
2026-03-10T09:13:01.390 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:01 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: ignoring --setuser ceph since I am not root
2026-03-10T09:13:01.390 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:01 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: ignoring --setgroup ceph since I am not root
2026-03-10T09:13:01.390 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:01 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:01.248+0000 7f2a6fe95140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T09:13:01.390 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:01 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:01.304+0000 7f2a6fe95140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T09:13:01.443 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout {
2026-03-10T09:13:01.443 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "epoch": 8,
2026-03-10T09:13:01.443 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T09:13:01.443 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "active_name": "a",
2026-03-10T09:13:01.443 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-10T09:13:01.443 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }
2026-03-10T09:13:01.443 INFO:teuthology.orchestra.run.vm09.stdout:Waiting for the mgr to restart...
2026-03-10T09:13:01.444 INFO:teuthology.orchestra.run.vm09.stdout:Waiting for mgr epoch 8...
2026-03-10T09:13:02.064 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:01 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:01.764+0000 7f2a6fe95140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T09:13:02.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:02 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/4192856989' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-10T09:13:02.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:02 vm09 ceph-mon[49644]: mgrmap e8: a(active, since 7s)
2026-03-10T09:13:02.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:02 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2806293015' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T09:13:02.328 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:02 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:02.116+0000 7f2a6fe95140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T09:13:02.328 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:02 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
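Enabling the dashboard module forces another mgr respawn, hence the second pass of module-load chatter interleaved here. The dashboard bring-up that the bootstrap performs maps onto these CLI calls; a sketch only, with the password file path illustrative:

    # Equivalent dashboard setup by hand
    ceph mgr module enable dashboard
    ceph dashboard create-self-signed-cert
    echo -n 'yd348nno16' > /tmp/dashboard-pw
    ceph dashboard ac-user-create admin -i /tmp/dashboard-pw administrator
    ceph config get mgr mgr/dashboard/ssl_server_port   # 8443 by default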
2026-03-10T09:13:02.328 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:02 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T09:13:02.328 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:02 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: from numpy import show_config as show_numpy_config
2026-03-10T09:13:02.328 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:02 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:02.211+0000 7f2a6fe95140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T09:13:02.328 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:02 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:02.252+0000 7f2a6fe95140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T09:13:02.328 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:02 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:02.329+0000 7f2a6fe95140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T09:13:03.124 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:02 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:02.874+0000 7f2a6fe95140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T09:13:03.124 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:02 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:02.996+0000 7f2a6fe95140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T09:13:03.124 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:03 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:03.040+0000 7f2a6fe95140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T09:13:03.124 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:03 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:03.081+0000 7f2a6fe95140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T09:13:03.124 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:03 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:03.126+0000 7f2a6fe95140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T09:13:03.389 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:03 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:03.168+0000 7f2a6fe95140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T09:13:03.389 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:03 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:03.365+0000 7f2a6fe95140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T09:13:03.889 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:03 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:03.428+0000 7f2a6fe95140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T09:13:03.889 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:03 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:03.676+0000 7f2a6fe95140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T09:13:04.298 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:03 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:03.993+0000 7f2a6fe95140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T09:13:04.298 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:04.037+0000 7f2a6fe95140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T09:13:04.298 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:04.085+0000 7f2a6fe95140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T09:13:04.298 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:04.175+0000 7f2a6fe95140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T09:13:04.298 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:04.216+0000 7f2a6fe95140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T09:13:04.573 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:04.300+0000 7f2a6fe95140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T09:13:04.573 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:04.427+0000 7f2a6fe95140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T09:13:04.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-mon[49644]: Active manager daemon a restarted
2026-03-10T09:13:04.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-mon[49644]: Activating manager daemon a
2026-03-10T09:13:04.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-mon[49644]: osdmap e3: 0 total, 0 up, 0 in
2026-03-10T09:13:04.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-mon[49644]: mgrmap e9: a(active, starting, since 0.00672114s)
2026-03-10T09:13:04.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T09:13:04.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T09:13:04.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T09:13:04.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T09:13:04.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T09:13:04.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-mon[49644]: Manager daemon a is now available
2026-03-10T09:13:04.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:13:04.889 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:04.575+0000 7f2a6fe95140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T09:13:04.889 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:13:04 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[49853]: 2026-03-10T09:13:04.618+0000 7f2a6fe95140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T09:13:05.671 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout {
2026-03-10T09:13:05.671 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 10,
2026-03-10T09:13:05.671 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-10T09:13:05.671 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout }
2026-03-10T09:13:05.671 INFO:teuthology.orchestra.run.vm09.stdout:mgr epoch 8 is available
2026-03-10T09:13:05.671 INFO:teuthology.orchestra.run.vm09.stdout:Generating a dashboard self-signed certificate...
2026-03-10T09:13:05.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:05 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T09:13:05.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:05 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T09:13:05.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:05 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a'
2026-03-10T09:13:05.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:05 vm09 ceph-mon[49644]: mgrmap e10: a(active, since 1.0124s)
2026-03-10T09:13:05.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:05 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a'
2026-03-10T09:13:06.268 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout Self-signed certificate created
2026-03-10T09:13:06.268 INFO:teuthology.orchestra.run.vm09.stdout:Creating initial admin user...
2026-03-10T09:13:06.727 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$biO2Zs8jhLaiieeEwKIzBOT7PqskW2RBFAry/LS7dU1S4kVLeTDUW", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773133986, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true}
2026-03-10T09:13:06.727 INFO:teuthology.orchestra.run.vm09.stdout:Fetching dashboard port number...
2026-03-10T09:13:06.979 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stdout 8443
2026-03-10T09:13:06.980 INFO:teuthology.orchestra.run.vm09.stdout:firewalld does not appear to be present
2026-03-10T09:13:06.980 INFO:teuthology.orchestra.run.vm09.stdout:Not possible to open ports <[8443]>. firewalld.service is not available
2026-03-10T09:13:06.981 INFO:teuthology.orchestra.run.vm09.stdout:Ceph Dashboard is now available at:
2026-03-10T09:13:06.981 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:13:06.981 INFO:teuthology.orchestra.run.vm09.stdout: URL: https://vm09.local:8443/
2026-03-10T09:13:06.981 INFO:teuthology.orchestra.run.vm09.stdout: User: admin
2026-03-10T09:13:06.981 INFO:teuthology.orchestra.run.vm09.stdout: Password: yd348nno16
2026-03-10T09:13:06.981 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:13:06.981 INFO:teuthology.orchestra.run.vm09.stdout:Saving cluster configuration to /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/config directory
2026-03-10T09:13:07.267 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:07 vm09 ceph-mon[49644]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T09:13:07.267 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:07 vm09 ceph-mon[49644]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T09:13:07.267 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:07 vm09 ceph-mon[49644]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:13:07.267 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:07 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a'
2026-03-10T09:13:07.267 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:07 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a'
2026-03-10T09:13:07.267 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:07 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a'
2026-03-10T09:13:07.267 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:07 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/307742184' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config:
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout:Or, if you are only running a single cluster on this host:
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout: sudo /home/ubuntu/cephtest/cephadm shell
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout:Please consider enabling telemetry to help improve Ceph:
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout: ceph telemetry on
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout:For more information see:
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:13:07.321 INFO:teuthology.orchestra.run.vm09.stdout:Bootstrap complete.
2026-03-10T09:13:07.355 INFO:tasks.cephadm:Fetching config...
2026-03-10T09:13:07.355 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T09:13:07.355 DEBUG:teuthology.orchestra.run.vm09:> dd if=/etc/ceph/ceph.conf of=/dev/stdout
2026-03-10T09:13:07.374 INFO:tasks.cephadm:Fetching client.admin keyring...
2026-03-10T09:13:07.374 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T09:13:07.374 DEBUG:teuthology.orchestra.run.vm09:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout
2026-03-10T09:13:07.457 INFO:tasks.cephadm:Fetching mon keyring...
2026-03-10T09:13:07.457 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T09:13:07.457 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/keyring of=/dev/stdout
2026-03-10T09:13:07.525 INFO:tasks.cephadm:Fetching pub ssh key...
2026-03-10T09:13:07.525 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T09:13:07.525 DEBUG:teuthology.orchestra.run.vm09:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout
2026-03-10T09:13:07.581 INFO:tasks.cephadm:Installing pub ssh key for root users...
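After "Bootstrap complete." the task copies cluster artifacts off the node by streaming each file through dd, as the DEBUG lines above show. The same fetch done by hand from the test driver would look roughly like this; a sketch, with the local destination names purely illustrative:

    # Stream bootstrap artifacts from the remote, mirroring the task's dd pattern
    ssh ubuntu@vm09.local "dd if=/etc/ceph/ceph.conf of=/dev/stdout" > ceph.conf
    ssh ubuntu@vm09.local "dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout" > admin.keyring
    ssh ubuntu@vm09.local "sudo dd if=/var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/keyring of=/dev/stdout" > mon.keyring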
2026-03-10T09:13:07.581 DEBUG:teuthology.orchestra.run.vm09:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwIGQEWlwjLwslhAzpClJkEeBfqGIiqgdF5KJVXenaE ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T09:13:07.687 INFO:teuthology.orchestra.run.vm09.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwIGQEWlwjLwslhAzpClJkEeBfqGIiqgdF5KJVXenaE ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76 2026-03-10T09:13:07.702 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-10T09:13:07.904 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:08.189 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:08 vm09 ceph-mon[49644]: [10/Mar/2026:09:13:06] ENGINE Bus STARTING 2026-03-10T09:13:08.189 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:08 vm09 ceph-mon[49644]: [10/Mar/2026:09:13:06] ENGINE Serving on http://192.168.123.109:8765 2026-03-10T09:13:08.189 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:08 vm09 ceph-mon[49644]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:13:08.189 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:08 vm09 ceph-mon[49644]: [10/Mar/2026:09:13:06] ENGINE Serving on https://192.168.123.109:7150 2026-03-10T09:13:08.189 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:08 vm09 ceph-mon[49644]: [10/Mar/2026:09:13:06] ENGINE Bus STARTED 2026-03-10T09:13:08.190 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:08 vm09 ceph-mon[49644]: [10/Mar/2026:09:13:06] ENGINE Client ('192.168.123.109', 55222) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T09:13:08.190 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:08 vm09 ceph-mon[49644]: mgrmap e11: a(active, since 2s) 2026-03-10T09:13:08.190 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:08 vm09 ceph-mon[49644]: from='client.? 
192.168.123.109:0/2732251030' entity='client.admin' 2026-03-10T09:13:08.228 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-10T09:13:08.228 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-10T09:13:08.473 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:08.861 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T09:13:08.862 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph osd crush tunables default 2026-03-10T09:13:09.065 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:09.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:09 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/1655000484' entity='client.admin' 2026-03-10T09:13:09.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:09 vm09 ceph-mon[49644]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:13:09.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:09 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:10.189 INFO:teuthology.orchestra.run.vm09.stderr:adjusted tunables profile to default 2026-03-10T09:13:10.246 INFO:tasks.cephadm:Adding mon.a on vm09 2026-03-10T09:13:10.246 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph orch apply mon '1;vm09:192.168.123.109=a' 2026-03-10T09:13:10.452 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:10.487 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:10 vm09 ceph-mon[49644]: from='client.? 
192.168.123.109:0/1572481471' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T09:13:10.487 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:10 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:10.487 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:10 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:10.487 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:10 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:13:10.487 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:10 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:10.487 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:10 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:13:10.488 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:10 vm09 ceph-mon[49644]: Updating vm09:/etc/ceph/ceph.conf 2026-03-10T09:13:10.488 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:10 vm09 ceph-mon[49644]: Updating vm09:/var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/config/ceph.conf 2026-03-10T09:13:10.488 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:10 vm09 ceph-mon[49644]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:13:10.720 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mon update... 2026-03-10T09:13:10.805 INFO:tasks.cephadm:Waiting for 1 mons in monmap... 2026-03-10T09:13:10.805 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph mon dump -f json 2026-03-10T09:13:11.038 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:11.326 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='client.? 
192.168.123.109:0/1572481471' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T09:13:11.326 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T09:13:11.326 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: Updating vm09:/var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/config/ceph.client.admin.keyring 2026-03-10T09:13:11.326 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "1;vm09:192.168.123.109=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: Saving service mon spec with placement vm09:192.168.123.109=a;count:1 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:11.327 
INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: Reconfiguring mon.a (unknown last config time)... 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: Reconfiguring daemon mon.a on vm09 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:11.327 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:11 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:11.327 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:13:11.328 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"349a7c12-1c61-11f1-8c28-6d0db3d11b76","modified":"2026-03-10T09:12:41.364721Z","created":"2026-03-10T09:12:41.364721Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:3300","nonce":0},{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:13:11.328 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T09:13:11.402 INFO:tasks.cephadm:Generating final ceph.conf file... 
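
The "Waiting for 1 mons in monmap" step above polls the monmap it just dumped and counts the entries in its "mons" array (the JSON shows exactly one, mon.a at rank 0, in quorum). A minimal Python sketch of that kind of wait loop, reusing the cephadm shell invocation from the DEBUG line; the actual tasks.cephadm code differs in detail, so treat this as illustrative only:

    import json
    import subprocess
    import time

    FSID = "349a7c12-1c61-11f1-8c28-6d0db3d11b76"
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

    def mon_count() -> int:
        # Same invocation as the DEBUG line above: run `ceph mon dump -f json`
        # inside a cephadm shell and count the monmap's "mons" entries.
        out = subprocess.check_output([
            "sudo", "/home/ubuntu/cephtest/cephadm", "--image", IMAGE, "shell",
            "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring",
            "--fsid", FSID, "--", "ceph", "mon", "dump", "-f", "json",
        ])
        return len(json.loads(out)["mons"])

    while mon_count() < 1:  # this run only ever places one mon
        time.sleep(5)
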
2026-03-10T09:13:11.402 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph config generate-minimal-conf 2026-03-10T09:13:11.577 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:11.796 INFO:teuthology.orchestra.run.vm09.stdout:# minimal ceph.conf for 349a7c12-1c61-11f1-8c28-6d0db3d11b76 2026-03-10T09:13:11.797 INFO:teuthology.orchestra.run.vm09.stdout:[global] 2026-03-10T09:13:11.797 INFO:teuthology.orchestra.run.vm09.stdout: fsid = 349a7c12-1c61-11f1-8c28-6d0db3d11b76 2026-03-10T09:13:11.797 INFO:teuthology.orchestra.run.vm09.stdout: mon_host = [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 2026-03-10T09:13:11.889 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 2026-03-10T09:13:11.889 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T09:13:11.889 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T09:13:11.917 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T09:13:11.917 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:13:11.984 INFO:tasks.cephadm:Adding mgr.a on vm09 2026-03-10T09:13:11.984 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph orch apply mgr '1;vm09=a' 2026-03-10T09:13:12.195 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:12.420 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mgr update... 2026-03-10T09:13:12.442 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:12 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/401836786' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:13:12.442 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:12 vm09 ceph-mon[49644]: mgrmap e12: a(active, since 6s) 2026-03-10T09:13:12.442 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:12 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/1408387692' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:12.495 INFO:tasks.cephadm:Deploying OSDs... 2026-03-10T09:13:12.495 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T09:13:12.495 DEBUG:teuthology.orchestra.run.vm09:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T09:13:12.514 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T09:13:12.514 DEBUG:teuthology.orchestra.run.vm09:> ls /dev/[sv]d? 
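
The two `sudo dd of=...` DEBUG lines above are how the final config and admin keyring reach the node: the file bytes are streamed over the SSH channel into dd's stdin. A sketch of the equivalent with plain ssh; teuthology actually goes through its own orchestra connection layer, so the transport here is an assumption:

    import subprocess

    def push_file(host: str, path: str, data: bytes) -> None:
        # Stream the bytes into `sudo dd of=<path>` on the remote side,
        # mirroring the `set -ex` / `sudo dd of=...` DEBUG lines above.
        subprocess.run(
            ["ssh", host, f"set -ex\nsudo dd of={path}"],
            input=data,
            check=True,
        )

    # The minimal conf generated just above.
    minimal_conf = (
        b"# minimal ceph.conf for 349a7c12-1c61-11f1-8c28-6d0db3d11b76\n"
        b"[global]\n"
        b"\tfsid = 349a7c12-1c61-11f1-8c28-6d0db3d11b76\n"
        b"\tmon_host = [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0]\n"
    )
    push_file("ubuntu@vm09.local", "/etc/ceph/ceph.conf", minimal_conf)
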
2026-03-10T09:13:12.571 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vda 2026-03-10T09:13:12.572 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdb 2026-03-10T09:13:12.572 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdc 2026-03-10T09:13:12.572 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdd 2026-03-10T09:13:12.572 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vde 2026-03-10T09:13:12.572 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T09:13:12.572 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T09:13:12.572 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdb 2026-03-10T09:13:12.636 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdb 2026-03-10T09:13:12.637 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T09:13:12.637 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10 2026-03-10T09:13:12.637 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T09:13:12.637 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T09:13:12.637 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 09:13:09.528411651 +0000 2026-03-10T09:13:12.637 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 09:08:51.062000000 +0000 2026-03-10T09:13:12.637 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 09:08:51.062000000 +0000 2026-03-10T09:13:12.637 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 09:08:49.235000000 +0000 2026-03-10T09:13:12.637 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T09:13:12.731 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-10T09:13:12.731 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-10T09:13:12.731 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000149249 s, 3.4 MB/s 2026-03-10T09:13:12.732 DEBUG:teuthology.orchestra.run.vm09:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T09:13:12.755 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdc 2026-03-10T09:13:12.812 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdc 2026-03-10T09:13:12.812 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T09:13:12.812 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20 2026-03-10T09:13:12.812 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T09:13:12.812 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T09:13:12.813 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 09:13:09.573411657 +0000 2026-03-10T09:13:12.813 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 09:08:51.055000000 +0000 2026-03-10T09:13:12.813 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 09:08:51.055000000 +0000 2026-03-10T09:13:12.813 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 09:08:49.238000000 +0000 2026-03-10T09:13:12.813 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T09:13:12.876 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-10T09:13:12.876 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-10T09:13:12.876 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000126387 s, 4.1 MB/s 2026-03-10T09:13:12.877 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T09:13:12.934 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdd 2026-03-10T09:13:12.990 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdd 2026-03-10T09:13:12.990 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T09:13:12.991 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-10T09:13:12.991 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T09:13:12.991 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T09:13:12.991 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 09:13:09.608411662 +0000 2026-03-10T09:13:12.991 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 09:08:51.056000000 +0000 2026-03-10T09:13:12.991 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 09:08:51.056000000 +0000 2026-03-10T09:13:12.991 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 09:08:49.257000000 +0000 2026-03-10T09:13:12.991 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T09:13:13.054 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-10T09:13:13.054 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-10T09:13:13.054 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000134772 s, 3.8 MB/s 2026-03-10T09:13:13.055 DEBUG:teuthology.orchestra.run.vm09:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T09:13:13.111 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vde 2026-03-10T09:13:13.167 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vde 2026-03-10T09:13:13.167 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T09:13:13.167 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-10T09:13:13.167 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T09:13:13.167 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T09:13:13.167 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 09:13:09.648411667 +0000 2026-03-10T09:13:13.167 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 09:08:51.056000000 +0000 2026-03-10T09:13:13.167 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 09:08:51.056000000 +0000 2026-03-10T09:13:13.167 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 09:08:49.310000000 +0000 2026-03-10T09:13:13.168 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T09:13:13.230 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-10T09:13:13.230 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-10T09:13:13.230 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000184966 s, 2.8 MB/s 2026-03-10T09:13:13.231 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T09:13:13.288 INFO:tasks.cephadm:Deploying osd.0 on vm09 with /dev/vde... 2026-03-10T09:13:13.288 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- lvm zap /dev/vde 2026-03-10T09:13:13.494 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:13.611 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:13 vm09 ceph-mon[49644]: from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:13:13.611 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:13 vm09 ceph-mon[49644]: Saving service mgr spec with placement vm09=a;count:1 2026-03-10T09:13:13.611 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:13 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:13.611 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:13 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:13:13.611 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:13 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:13.611 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:13 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:13:13.611 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:13 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 
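
Since /scratch_devs was absent on the VM (the dd read above returned exit status 1), the task fell back to globbing /dev/[sv]d?, dropped the root disk /dev/vda, and then ran three sanity checks on each remaining device, exactly as the stat/dd/mount sequences for /dev/vdb through /dev/vde show. A condensed Python sketch of that per-device probe (illustrative; the real helper lives in teuthology.misc):

    import subprocess

    def probe(dev: str) -> None:
        # The three checks the task just ran per device: it must be a block
        # special file, its first sector must be readable, and nothing other
        # than devtmpfs may have it mounted.
        subprocess.run(["stat", dev], check=True)
        subprocess.run(
            ["sudo", "dd", f"if={dev}", "of=/dev/null", "count=1"], check=True
        )
        mounted = subprocess.run(
            f"mount | grep -v devtmpfs | grep -q {dev}", shell=True
        )
        if mounted.returncode == 0:
            raise RuntimeError(f"{dev} is mounted; not usable as an OSD device")

    # /dev/vda is the root disk and was already removed from the list.
    for dev in ("/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde"):
        probe(dev)
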
2026-03-10T09:13:13.611 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:13 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:13.611 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:13 vm09 ceph-mon[49644]: Reconfiguring mgr.a (unknown last config time)... 2026-03-10T09:13:13.611 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:13 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T09:13:13.611 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:13 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T09:13:13.611 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:13 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:13.611 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:13 vm09 ceph-mon[49644]: Reconfiguring daemon mgr.a on vm09 2026-03-10T09:13:13.611 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:13 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:13.611 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:13 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:14.399 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:13:14.420 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph orch daemon add osd vm09:/dev/vde 2026-03-10T09:13:14.593 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:14.873 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:14 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:13:14.873 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:14 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:13:14.873 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:14 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:16.032 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:15 vm09 ceph-mon[49644]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:13:16.033 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:15 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2637063685' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d0268d12-2d91-4c58-847f-4481a225bb98"}]: dispatch 2026-03-10T09:13:16.033 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:15 vm09 ceph-mon[49644]: from='client.? 
192.168.123.109:0/2637063685' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d0268d12-2d91-4c58-847f-4481a225bb98"}]': finished 2026-03-10T09:13:16.033 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:15 vm09 ceph-mon[49644]: osdmap e5: 1 total, 0 up, 1 in 2026-03-10T09:13:16.033 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:15 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:13:17.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:16 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/3876579535' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:13:19.969 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:19 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T09:13:19.969 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:19 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:21.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:20 vm09 ceph-mon[49644]: Deploying daemon osd.0 on vm09 2026-03-10T09:13:22.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:22 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:13:23.398 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 0 on host 'vm09' 2026-03-10T09:13:23.399 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:23 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:23.399 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:23 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:23.399 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:23 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:23.399 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:23 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:23.399 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:23 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:23.399 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:23 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:13:23.399 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:23 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:23.467 DEBUG:teuthology.orchestra.run.vm09:osd.0> sudo journalctl -f -n 0 -u ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.0.service 2026-03-10T09:13:23.468 INFO:tasks.cephadm:Deploying osd.1 on vm09 with /dev/vdd... 
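
Each OSD goes through the same two-step sequence visible above: a ceph-volume `lvm zap` to wipe the device, then `ceph orch daemon add osd host:dev` to hand it to the orchestrator. A condensed Python sketch of that pair of calls, with the paths and image pinned as in the DEBUG lines (illustrative, not the task's actual code):

    import subprocess

    FSID = "349a7c12-1c61-11f1-8c28-6d0db3d11b76"
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    CEPHADM = ["sudo", "/home/ubuntu/cephtest/cephadm", "--image", IMAGE]
    KEYS = ["-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring", "--fsid", FSID]

    def add_osd(host: str, dev: str) -> None:
        # 1) wipe any leftover LVM/partition state on the device
        subprocess.run(
            CEPHADM + ["ceph-volume"] + KEYS + ["--", "lvm", "zap", dev],
            check=True,
        )
        # 2) hand the clean device to the orchestrator
        subprocess.run(
            CEPHADM + ["shell"] + KEYS
            + ["--", "ceph", "orch", "daemon", "add", "osd", f"{host}:{dev}"],
            check=True,
        )

    add_osd("vm09", "/dev/vdd")  # osd.1, as in the step above
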
2026-03-10T09:13:23.469 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- lvm zap /dev/vdd 2026-03-10T09:13:23.679 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:13:23 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0[60829]: 2026-03-10T09:13:23.601+0000 7f1701158740 -1 osd.0 0 log_to_monitors true 2026-03-10T09:13:23.784 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:24.393 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:24 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:13:24.393 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:24 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:24.393 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:24 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:24.393 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:24 vm09 ceph-mon[49644]: from='osd.0 [v2:192.168.123.109:6802/2648938696,v1:192.168.123.109:6803/2648938696]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T09:13:25.138 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:13:25.154 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph orch daemon add osd vm09:/dev/vdd 2026-03-10T09:13:25.342 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:25.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:25 vm09 ceph-mon[49644]: from='osd.0 [v2:192.168.123.109:6802/2648938696,v1:192.168.123.109:6803/2648938696]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T09:13:25.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:25 vm09 ceph-mon[49644]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T09:13:25.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:25 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:13:25.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:25 vm09 ceph-mon[49644]: from='osd.0 [v2:192.168.123.109:6802/2648938696,v1:192.168.123.109:6803/2648938696]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T09:13:25.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:25 vm09 ceph-mon[49644]: Detected new or changed devices on vm09 2026-03-10T09:13:25.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:25 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:25.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:25 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:25.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:25 vm09 
ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:13:25.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:25 vm09 ceph-mon[49644]: Adjusting osd_memory_target on vm09 to 257.0M 2026-03-10T09:13:25.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:25 vm09 ceph-mon[49644]: Unable to set osd_memory_target on vm09 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-10T09:13:25.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:25 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:25.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:25 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:13:25.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:25 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:25.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:25 vm09 ceph-mon[49644]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:13:26.512 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:26 vm09 ceph-mon[49644]: from='osd.0 [v2:192.168.123.109:6802/2648938696,v1:192.168.123.109:6803/2648938696]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T09:13:26.512 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:26 vm09 ceph-mon[49644]: osdmap e7: 1 total, 0 up, 1 in 2026-03-10T09:13:26.512 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:26 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:13:26.512 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:26 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:13:26.512 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:26 vm09 ceph-mon[49644]: from='client.14193 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:13:26.512 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:26 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:13:26.512 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:26 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:13:26.512 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:26 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:26.512 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:26 vm09 ceph-mon[49644]: from='osd.0 [v2:192.168.123.109:6802/2648938696,v1:192.168.123.109:6803/2648938696]' entity='osd.0' 2026-03-10T09:13:26.512 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:26 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 
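
The "Unable to set osd_memory_target" warnings here are expected on these small VPS nodes: cephadm autotunes osd_memory_target by splitting the host's memory across its daemons, and with each OSD added the per-daemon share (269530726 bytes, about 257 MiB, at this point) falls further below the option's hard minimum of 939524096 bytes (896 MiB), so the mon rejects the value and the mgr just logs it and moves on. If the noise mattered, one could disable the autotuner or pin a value at or above the minimum; a hedged sketch of both options, neither of which this run performs (in this containerized cluster the commands would go through the same cephadm shell wrapper shown earlier):

    import subprocess

    # Option A: stop cephadm from autotuning OSD memory targets.
    subprocess.run(
        ["sudo", "ceph", "config", "set", "osd",
         "osd_memory_target_autotune", "false"],
        check=True,
    )
    # Option B: pin an explicit value at the option's hard minimum (896 MiB).
    subprocess.run(
        ["sudo", "ceph", "config", "set", "osd",
         "osd_memory_target", "939524096"],
        check=True,
    )
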
2026-03-10T09:13:26.512 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:13:26 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0[60829]: 2026-03-10T09:13:26.404+0000 7f16fd0d9640 -1 osd.0 0 waiting for initial osdmap 2026-03-10T09:13:26.512 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:13:26 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0[60829]: 2026-03-10T09:13:26.417+0000 7f16f8702640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T09:13:27.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:27 vm09 ceph-mon[49644]: purged_snaps scrub starts 2026-03-10T09:13:27.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:27 vm09 ceph-mon[49644]: purged_snaps scrub ok 2026-03-10T09:13:27.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:27 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/3660027688' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9a4cc04f-8019-4083-b136-60d601e0d497"}]: dispatch 2026-03-10T09:13:27.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:27 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/3660027688' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9a4cc04f-8019-4083-b136-60d601e0d497"}]': finished 2026-03-10T09:13:27.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:27 vm09 ceph-mon[49644]: osd.0 [v2:192.168.123.109:6802/2648938696,v1:192.168.123.109:6803/2648938696] boot 2026-03-10T09:13:27.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:27 vm09 ceph-mon[49644]: osdmap e8: 2 total, 1 up, 2 in 2026-03-10T09:13:27.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:27 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:13:27.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:27 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:13:27.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:27 vm09 ceph-mon[49644]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:13:27.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:27 vm09 ceph-mon[49644]: from='client.? 
192.168.123.109:0/834981548' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:13:30.053 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:29 vm09 ceph-mon[49644]: pgmap v10: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:13:30.687 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:30 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T09:13:30.688 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:30 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:31.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:31 vm09 ceph-mon[49644]: Deploying daemon osd.1 on vm09 2026-03-10T09:13:31.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:31 vm09 ceph-mon[49644]: pgmap v11: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:13:32.741 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:32 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:13:32.741 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:32 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:32.741 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:32 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:33.863 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 1 on host 'vm09' 2026-03-10T09:13:33.926 DEBUG:teuthology.orchestra.run.vm09:osd.1> sudo journalctl -f -n 0 -u ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.1.service 2026-03-10T09:13:33.928 INFO:tasks.cephadm:Deploying osd.2 on vm09 with /dev/vdc... 
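
After each "Created osd(s) N" confirmation, the task attaches a follower to the new daemon's systemd unit so its output lands in this log, as the `osd.1> sudo journalctl -f -n 0 -u ...` DEBUG line just above shows (and as the journalctl@ceph.osd.N lines throughout demonstrate). An illustrative Python equivalent of that attach, under the assumption that a long-lived child process is acceptable:

    import subprocess

    FSID = "349a7c12-1c61-11f1-8c28-6d0db3d11b76"

    # `-n 0` skips historical entries; `-f` keeps the pipe open so new
    # daemon output streams into the test log for the rest of the run.
    follower = subprocess.Popen([
        "sudo", "journalctl", "-f", "-n", "0",
        "-u", f"ceph-{FSID}@osd.1.service",
    ])
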
2026-03-10T09:13:33.928 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- lvm zap /dev/vdc 2026-03-10T09:13:34.004 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:33 vm09 ceph-mon[49644]: pgmap v12: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:13:34.004 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:33 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:34.004 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:33 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:34.004 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:33 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:34.004 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:33 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:13:34.004 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:33 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:34.180 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:34.255 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:13:34 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1[65658]: 2026-03-10T09:13:34.127+0000 7f8b4fa7a740 -1 osd.1 0 log_to_monitors true 2026-03-10T09:13:35.014 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:34 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:13:35.014 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:34 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:35.014 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:34 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:35.014 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:34 vm09 ceph-mon[49644]: from='osd.1 [v2:192.168.123.109:6810/2434513999,v1:192.168.123.109:6811/2434513999]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T09:13:35.684 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:13:35.707 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph orch daemon add osd vm09:/dev/vdc 2026-03-10T09:13:35.956 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:36.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:35 vm09 ceph-mon[49644]: pgmap v13: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:13:36.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:35 vm09 ceph-mon[49644]: from='osd.1 [v2:192.168.123.109:6810/2434513999,v1:192.168.123.109:6811/2434513999]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", 
"class": "hdd", "ids": ["1"]}]': finished 2026-03-10T09:13:36.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:35 vm09 ceph-mon[49644]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T09:13:36.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:35 vm09 ceph-mon[49644]: from='osd.1 [v2:192.168.123.109:6810/2434513999,v1:192.168.123.109:6811/2434513999]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T09:13:36.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:35 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:13:36.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:35 vm09 ceph-mon[49644]: Detected new or changed devices on vm09 2026-03-10T09:13:36.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:35 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:36.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:35 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:36.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:35 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:13:36.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:35 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:13:36.140 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:35 vm09 ceph-mon[49644]: Adjusting osd_memory_target on vm09 to 128.5M 2026-03-10T09:13:36.140 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:35 vm09 ceph-mon[49644]: Unable to set osd_memory_target on vm09 to 134765363: error parsing value: Value '134765363' is below minimum 939524096 2026-03-10T09:13:36.140 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:35 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:36.140 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:35 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:13:36.140 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:35 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:36.881 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:36 vm09 ceph-mon[49644]: from='osd.1 [v2:192.168.123.109:6810/2434513999,v1:192.168.123.109:6811/2434513999]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T09:13:36.881 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:36 vm09 ceph-mon[49644]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T09:13:36.881 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:36 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:13:36.881 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:36 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:13:36.881 
INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:36 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:13:36.881 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:36 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:13:36.881 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:36 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:37.388 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:13:37 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1[65658]: 2026-03-10T09:13:37.132+0000 7f8b4b9fb640 -1 osd.1 0 waiting for initial osdmap 2026-03-10T09:13:37.388 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:13:37 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1[65658]: 2026-03-10T09:13:37.139+0000 7f8b47024640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T09:13:38.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:37 vm09 ceph-mon[49644]: purged_snaps scrub starts 2026-03-10T09:13:38.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:37 vm09 ceph-mon[49644]: purged_snaps scrub ok 2026-03-10T09:13:38.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:37 vm09 ceph-mon[49644]: from='client.14202 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:13:38.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:37 vm09 ceph-mon[49644]: pgmap v16: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:13:38.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:37 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:13:38.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:37 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/3424190518' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a2664302-47b2-48a9-ac35-65f3bc5a6c6e"}]: dispatch 2026-03-10T09:13:38.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:37 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/3424190518' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a2664302-47b2-48a9-ac35-65f3bc5a6c6e"}]': finished 2026-03-10T09:13:38.140 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:37 vm09 ceph-mon[49644]: osdmap e11: 3 total, 1 up, 3 in 2026-03-10T09:13:38.140 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:37 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:13:38.140 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:37 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:13:38.140 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:37 vm09 ceph-mon[49644]: from='osd.1 [v2:192.168.123.109:6810/2434513999,v1:192.168.123.109:6811/2434513999]' entity='osd.1' 2026-03-10T09:13:38.140 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:37 vm09 ceph-mon[49644]: from='client.? 
192.168.123.109:0/366475169' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:13:38.140 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:37 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:13:39.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:39 vm09 ceph-mon[49644]: osd.1 [v2:192.168.123.109:6810/2434513999,v1:192.168.123.109:6811/2434513999] boot 2026-03-10T09:13:39.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:39 vm09 ceph-mon[49644]: osdmap e12: 3 total, 2 up, 3 in 2026-03-10T09:13:39.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:39 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:13:39.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:39 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:13:40.375 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:40 vm09 ceph-mon[49644]: pgmap v19: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T09:13:42.241 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:42 vm09 ceph-mon[49644]: pgmap v20: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T09:13:42.241 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:42 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T09:13:42.241 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:42 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:42.241 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:42 vm09 ceph-mon[49644]: Deploying daemon osd.2 on vm09 2026-03-10T09:13:44.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:44 vm09 ceph-mon[49644]: pgmap v21: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T09:13:44.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:44 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:13:44.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:44 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:44.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:44 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:44.653 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 2 on host 'vm09' 2026-03-10T09:13:44.716 DEBUG:teuthology.orchestra.run.vm09:osd.2> sudo journalctl -f -n 0 -u ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.2.service 2026-03-10T09:13:44.718 INFO:tasks.cephadm:Waiting for 3 OSDs to come up... 
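
"Waiting for 3 OSDs to come up" drives the repeated `ceph osd stat -f json` calls that follow: the task polls until num_up_osds reaches the expected count (the outputs below progress 2, 2, 2, then 3 as osd.2 finishes booting). A sketch of such a poll loop, reusing the logged invocation (illustrative; the real loop adds timeouts and error handling):

    import json
    import subprocess
    import time

    FSID = "349a7c12-1c61-11f1-8c28-6d0db3d11b76"
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

    def up_osds() -> int:
        # Same call as the DEBUG lines below: `ceph osd stat -f json`.
        out = subprocess.check_output([
            "sudo", "/home/ubuntu/cephtest/cephadm", "--image", IMAGE, "shell",
            "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring",
            "--fsid", FSID, "--", "ceph", "osd", "stat", "-f", "json",
        ])
        return json.loads(out)["num_up_osds"]

    while up_osds() < 3:  # all three deployed OSDs must report up
        time.sleep(1)
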
2026-03-10T09:13:44.718 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph osd stat -f json 2026-03-10T09:13:45.055 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:13:44 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2[70882]: 2026-03-10T09:13:44.883+0000 7f3a9c998740 -1 osd.2 0 log_to_monitors true 2026-03-10T09:13:45.105 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:45.373 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:45 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:45.373 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:45 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:45.373 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:45 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:45.373 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:45 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:13:45.373 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:45 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:45.373 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:45 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:13:45.373 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:45 vm09 ceph-mon[49644]: pgmap v22: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T09:13:45.373 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:45 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:45.373 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:45 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:45.373 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:45 vm09 ceph-mon[49644]: from='osd.2 [v2:192.168.123.109:6818/1690773368,v1:192.168.123.109:6819/1690773368]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T09:13:45.373 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:13:45.437 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":12,"num_osds":3,"num_up_osds":2,"osd_up_since":1773134018,"num_in_osds":3,"osd_in_since":1773134017,"num_remapped_pgs":0} 2026-03-10T09:13:46.438 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph osd stat -f json 2026-03-10T09:13:46.613 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:46.639 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: from='client.? 
192.168.123.109:0/3709079163' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:13:46.639 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: from='osd.2 [v2:192.168.123.109:6818/1690773368,v1:192.168.123.109:6819/1690773368]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T09:13:46.639 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: osdmap e13: 3 total, 2 up, 3 in 2026-03-10T09:13:46.639 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: from='osd.2 [v2:192.168.123.109:6818/1690773368,v1:192.168.123.109:6819/1690773368]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T09:13:46.639 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:13:46.639 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: Detected new or changed devices on vm09 2026-03-10T09:13:46.639 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:46.639 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:46.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:13:46.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:13:46.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:13:46.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: Adjusting osd_memory_target on vm09 to 87737k 2026-03-10T09:13:46.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: Unable to set osd_memory_target on vm09 to 89843575: error parsing value: Value '89843575' is below minimum 939524096 2026-03-10T09:13:46.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:13:46.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:13:46.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:46 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:13:46.846 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:13:46.908 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":14,"num_osds":3,"num_up_osds":2,"osd_up_since":1773134018,"num_in_osds":3,"osd_in_since":1773134017,"num_remapped_pgs":0} 2026-03-10T09:13:47.909 
DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph osd stat -f json 2026-03-10T09:13:47.932 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:47 vm09 ceph-mon[49644]: pgmap v24: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T09:13:47.932 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:47 vm09 ceph-mon[49644]: from='osd.2 [v2:192.168.123.109:6818/1690773368,v1:192.168.123.109:6819/1690773368]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T09:13:47.932 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:47 vm09 ceph-mon[49644]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T09:13:47.932 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:47 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:13:47.932 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:47 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/1895105048' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:13:47.932 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:13:47 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2[70882]: 2026-03-10T09:13:47.703+0000 7f3a9912c640 -1 osd.2 0 waiting for initial osdmap 2026-03-10T09:13:47.932 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:13:47 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2[70882]: 2026-03-10T09:13:47.709+0000 7f3a93f42640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T09:13:48.092 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:48.344 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:13:48.416 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":14,"num_osds":3,"num_up_osds":2,"osd_up_since":1773134018,"num_in_osds":3,"osd_in_since":1773134017,"num_remapped_pgs":0} 2026-03-10T09:13:49.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:48 vm09 ceph-mon[49644]: purged_snaps scrub starts 2026-03-10T09:13:49.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:48 vm09 ceph-mon[49644]: purged_snaps scrub ok 2026-03-10T09:13:49.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:48 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:13:49.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:48 vm09 ceph-mon[49644]: from='osd.2 [v2:192.168.123.109:6818/1690773368,v1:192.168.123.109:6819/1690773368]' entity='osd.2' 2026-03-10T09:13:49.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:48 vm09 ceph-mon[49644]: from='client.? 
192.168.123.109:0/4098725997' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:13:49.417 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph osd stat -f json 2026-03-10T09:13:49.599 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:49.716 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:49 vm09 ceph-mon[49644]: pgmap v26: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T09:13:49.716 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:49 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:13:49.716 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:49 vm09 ceph-mon[49644]: osd.2 [v2:192.168.123.109:6818/1690773368,v1:192.168.123.109:6819/1690773368] boot 2026-03-10T09:13:49.716 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:49 vm09 ceph-mon[49644]: osdmap e15: 3 total, 3 up, 3 in 2026-03-10T09:13:49.716 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:49 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:13:49.832 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:13:49.907 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":15,"num_osds":3,"num_up_osds":3,"osd_up_since":1773134028,"num_in_osds":3,"osd_in_since":1773134017,"num_remapped_pgs":0} 2026-03-10T09:13:49.907 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph osd dump --format=json 2026-03-10T09:13:50.089 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:50.302 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:13:50.302 
INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":15,"fsid":"349a7c12-1c61-11f1-8c28-6d0db3d11b76","created":"2026-03-10T09:12:42.416608+0000","modified":"2026-03-10T09:13:48.705812+0000","last_up_change":"2026-03-10T09:13:48.705812+0000","last_in_change":"2026-03-10T09:13:37.110545+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"d0268d12-2d91-4c58-847f-4481a225bb98","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":2648938696},{"type":"v1","addr":"192.168.123.109:6803","nonce":2648938696}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":2648938696},{"type":"v1","addr":"192.168.123.109:6805","nonce":2648938696}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":2648938696},{"type":"v1","addr":"192.168.123.109:6809","nonce":2648938696}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":2648938696},{"type":"v1","addr":"192.168.123.109:6807","nonce":2648938696}]},"public_addr":"192.168.123.109:6803/2648938696","cluster_addr":"192.168.123.109:6805/2648938696","heartbeat_back_addr":"192.168.123.109:6809/2648938696","heartbeat_front_addr":"192.168.123.109:6807/2648938696","state":["exists","up"]},{"osd":1,"uuid":"9a4cc04f-8019-4083-b136-60d601e0d497","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":2434513999},{"type":"v1","addr":"192.168.123.109:6811","nonce":2434513999}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":2434513999},{"type":"v1","addr":"192.168.123.109:6813","nonce":2434513999}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":2434513999},{"type":"v1","addr":"192.168.123.109:6817","nonce":2434513999}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":2434513999},{"type":"v1","addr":"192.168.123.109:6815","nonce":2434513999}]},"public_addr":"192.168.123.109:6811/2434513999","cluster_addr":"192.168.123.109:6813/2434513999","heartbeat_back_addr":"192.168.123.109:6817/2434513999","heartbeat_front_addr":"192.168.123.109:6815/2434513999","state":["exists","up"]},{"osd":2,"uuid":"a2664302-47b2-48a9-ac35-65f3bc5a6c6e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":15,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":1690773368},{"type":"v1","addr":"192.168.123.109:6819","nonce":1690773368}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":1690773368},{"type":"v1","addr":"192.168.123.109:6821","nonce":1690773368}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":1690773368},{"type":"v1","addr":"192.168.123.109:6825","nonce":1
690773368}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":1690773368},{"type":"v1","addr":"192.168.123.109:6823","nonce":1690773368}]},"public_addr":"192.168.123.109:6819/1690773368","cluster_addr":"192.168.123.109:6821/1690773368","heartbeat_back_addr":"192.168.123.109:6825/1690773368","heartbeat_front_addr":"192.168.123.109:6823/1690773368","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.109:0/1989739592":"2026-03-11T09:13:04.620297+0000","192.168.123.109:6801/1679320120":"2026-03-11T09:13:04.620297+0000","192.168.123.109:6800/1679320120":"2026-03-11T09:13:04.620297+0000","192.168.123.109:0/1131459195":"2026-03-11T09:13:04.620297+0000","192.168.123.109:6801/2573242556":"2026-03-11T09:12:53.422589+0000","192.168.123.109:6800/2573242556":"2026-03-11T09:12:53.422589+0000","192.168.123.109:0/397778724":"2026-03-11T09:12:53.422589+0000","192.168.123.109:0/3280768865":"2026-03-11T09:13:04.620297+0000","192.168.123.109:0/2622392915":"2026-03-11T09:12:53.422589+0000","192.168.123.109:0/1236928792":"2026-03-11T09:12:53.422589+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T09:13:50.370 INFO:tasks.cephadm.ceph_manager.ceph:[] 2026-03-10T09:13:50.370 INFO:tasks.cephadm:Setting up client nodes... 2026-03-10T09:13:50.370 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T09:13:50.540 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:50.808 INFO:teuthology.orchestra.run.vm09.stdout:[client.0] 2026-03-10T09:13:50.808 INFO:teuthology.orchestra.run.vm09.stdout: key = AQDO4K9pjcgvMBAAKU0VvRyTIHv6J+e1ao9Tfg== 2026-03-10T09:13:50.860 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T09:13:50.860 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-10T09:13:50.860 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-10T09:13:50.894 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
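
The client-node setup just above is a three-step recipe: mint (or fetch) a client.0 key with allow-all caps on mon/osd/mds/mgr via "ceph auth get-or-create", stream the returned keyring to /etc/ceph/ceph.client.0.keyring through "sudo dd", and chmod it 0644 so the unprivileged test user can read it. A sketch of the same steps run locally (a hypothetical wrapper; teuthology actually drives these commands over SSH inside the "set -ex" shells shown above):

    import subprocess

    FSID = "349a7c12-1c61-11f1-8c28-6d0db3d11b76"  # fsid of this run

    def setup_client_keyring(client_id="0"):
        # 1. Create the client key with allow-all caps, as in the log above.
        keyring = subprocess.run(
            ["sudo", "cephadm", "shell", "--fsid", FSID, "--",
             "ceph", "auth", "get-or-create", "client." + client_id,
             "mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"],
            check=True, capture_output=True, text=True,
        ).stdout
        path = "/etc/ceph/ceph.client." + client_id + ".keyring"
        # 2. Equivalent of `sudo dd of=<path>` with the keyring on stdin.
        subprocess.run(["sudo", "dd", "of=" + path], input=keyring, text=True, check=True)
        # 3. World-readable so the test client can load it.
        subprocess.run(["sudo", "chmod", "0644", path], check=True)
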
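The repeated "ceph osd stat -f json" calls earlier in this excerpt are the harness's cluster-readiness poll: it inspects the returned counters until every OSD reports up (the first poll sees num_up_osds 2 of num_osds 3 at osdmap epoch 14; the retry after osd.2 boots sees 3 of 3 at epoch 15), then fetches the full map with "ceph osd dump --format=json". A minimal sketch of that poll loop, assuming a host with cephadm on the PATH and a hypothetical run_cephadm_json() helper that is not teuthology's actual API:

    import json
    import subprocess
    import time

    FSID = "349a7c12-1c61-11f1-8c28-6d0db3d11b76"  # fsid of this run

    def run_cephadm_json(args, timeout=900):
        # Stand-in for: sudo cephadm shell --fsid <fsid> -- <args> -f json
        out = subprocess.run(
            ["sudo", "cephadm", "shell", "--fsid", FSID, "--", *args, "-f", "json"],
            check=True, capture_output=True, text=True, timeout=timeout,
        ).stdout
        return json.loads(out)

    def wait_for_osds_up(interval=2.0, deadline=300):
        # Poll `ceph osd stat` until num_up_osds == num_osds, as the log does.
        end = time.monotonic() + deadline
        while time.monotonic() < end:
            stat = run_cephadm_json(["ceph", "osd", "stat"])
            if stat["num_up_osds"] == stat["num_osds"]:
                return stat
            time.sleep(interval)
        raise TimeoutError("not all OSDs came up before the deadline")

The "waiting for mgr available" check that follows immediately below applies the same loop to "ceph mgr dump --format=json", waiting for the active mgr to report "available": true.
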
2026-03-10T09:13:50.894 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-10T09:13:50.894 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph mgr dump --format=json 2026-03-10T09:13:50.955 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:50 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/77613911' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:13:50.956 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:50 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2542442026' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T09:13:51.099 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:51.351 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:13:51.419 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":12,"flags":0,"active_gid":14150,"active_name":"a","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":2314175783},{"type":"v1","addr":"192.168.123.109:6801","nonce":2314175783}]},"active_addr":"192.168.123.109:6801/2314175783","active_change":"2026-03-10T09:13:04.620467+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate 
with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.109:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.109:0","nonce":756098690}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.109:0","nonce":1891302633}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"
192.168.123.109:0","nonce":4287105925}]}]} 2026-03-10T09:13:51.420 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T09:13:51.420 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T09:13:51.420 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph osd dump --format=json 2026-03-10T09:13:51.630 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:51.747 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:51 vm09 ceph-mon[49644]: pgmap v28: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:13:51.748 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:51 vm09 ceph-mon[49644]: osdmap e16: 3 total, 3 up, 3 in 2026-03-10T09:13:51.748 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:51 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T09:13:51.748 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:51 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2245487864' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T09:13:51.748 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:51 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2245487864' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T09:13:51.748 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:51 vm09 ceph-mon[49644]: from='client.? 
192.168.123.109:0/3357056455' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T09:13:51.859 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:13:51.859 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":17,"fsid":"349a7c12-1c61-11f1-8c28-6d0db3d11b76","created":"2026-03-10T09:12:42.416608+0000","modified":"2026-03-10T09:13:51.665045+0000","last_up_change":"2026-03-10T09:13:48.705812+0000","last_in_change":"2026-03-10T09:13:37.110545+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T09:13:50.674161+0000","flags":32769,"flags_names":"hashpspool,creating","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{},"read_balance":{"score_type":"Fair 
distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"d0268d12-2d91-4c58-847f-4481a225bb98","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":2648938696},{"type":"v1","addr":"192.168.123.109:6803","nonce":2648938696}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":2648938696},{"type":"v1","addr":"192.168.123.109:6805","nonce":2648938696}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":2648938696},{"type":"v1","addr":"192.168.123.109:6809","nonce":2648938696}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":2648938696},{"type":"v1","addr":"192.168.123.109:6807","nonce":2648938696}]},"public_addr":"192.168.123.109:6803/2648938696","cluster_addr":"192.168.123.109:6805/2648938696","heartbeat_back_addr":"192.168.123.109:6809/2648938696","heartbeat_front_addr":"192.168.123.109:6807/2648938696","state":["exists","up"]},{"osd":1,"uuid":"9a4cc04f-8019-4083-b136-60d601e0d497","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":2434513999},{"type":"v1","addr":"192.168.123.109:6811","nonce":2434513999}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":2434513999},{"type":"v1","addr":"192.168.123.109:6813","nonce":2434513999}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":2434513999},{"type":"v1","addr":"192.168.123.109:6817","nonce":2434513999}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":2434513999},{"type":"v1","addr":"192.168.123.109:6815","nonce":2434513999}]},"public_addr":"192.168.123.109:6811/2434513999","cluster_addr":"192.168.123.109:6813/2434513999","heartbeat_back_addr":"192.168.123.109:6817/2434513999","heartbeat_front_addr":"192.168.123.109:6815/2434513999","state":["exists","up"]},{"osd":2,"uuid":"a2664302-47b2-48a9-ac35-65f3bc5a6c6e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":15,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":1690773368},{"type":"v1","addr":"192.168.123.109:6819","nonce":1690773368}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":1690773368},{"type":"v1","addr":"192.168.123.109:6821","nonce":1690773368}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":1690773368},{"type":"v1","addr":"192.168.123.109:6825","nonce":1690773368}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":1690773368},{"type":"v1","addr":"192.168.123.109:6823","nonce":1690773368}]},"public_addr":"192.168.123.109:6819/1690773368","cluster_addr":"192.168.123.109:6821/1690773368","heartbeat_back_addr":"192.168.123.109:6825/1690773368","heartbeat_front_addr":"192.168.123.109:6823/1690773368","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snap
s_scrub":"2026-03-10T09:13:24.587643+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:13:35.126733+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:13:45.924744+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.109:0/1989739592":"2026-03-11T09:13:04.620297+0000","192.168.123.109:6801/1679320120":"2026-03-11T09:13:04.620297+0000","192.168.123.109:6800/1679320120":"2026-03-11T09:13:04.620297+0000","192.168.123.109:0/1131459195":"2026-03-11T09:13:04.620297+0000","192.168.123.109:6801/2573242556":"2026-03-11T09:12:53.422589+0000","192.168.123.109:6800/2573242556":"2026-03-11T09:12:53.422589+0000","192.168.123.109:0/397778724":"2026-03-11T09:12:53.422589+0000","192.168.123.109:0/3280768865":"2026-03-11T09:13:04.620297+0000","192.168.123.109:0/2622392915":"2026-03-11T09:12:53.422589+0000","192.168.123.109:0/1236928792":"2026-03-11T09:12:53.422589+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T09:13:51.931 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-10T09:13:51.931 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph osd dump --format=json 2026-03-10T09:13:52.105 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:52.328 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:13:52.328 
INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":17,"fsid":"349a7c12-1c61-11f1-8c28-6d0db3d11b76","created":"2026-03-10T09:12:42.416608+0000","modified":"2026-03-10T09:13:51.665045+0000","last_up_change":"2026-03-10T09:13:48.705812+0000","last_in_change":"2026-03-10T09:13:37.110545+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T09:13:50.674161+0000","flags":32769,"flags_names":"hashpspool,creating","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{},"read_balance":{"score_type":"Fair 
distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"d0268d12-2d91-4c58-847f-4481a225bb98","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":2648938696},{"type":"v1","addr":"192.168.123.109:6803","nonce":2648938696}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":2648938696},{"type":"v1","addr":"192.168.123.109:6805","nonce":2648938696}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":2648938696},{"type":"v1","addr":"192.168.123.109:6809","nonce":2648938696}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":2648938696},{"type":"v1","addr":"192.168.123.109:6807","nonce":2648938696}]},"public_addr":"192.168.123.109:6803/2648938696","cluster_addr":"192.168.123.109:6805/2648938696","heartbeat_back_addr":"192.168.123.109:6809/2648938696","heartbeat_front_addr":"192.168.123.109:6807/2648938696","state":["exists","up"]},{"osd":1,"uuid":"9a4cc04f-8019-4083-b136-60d601e0d497","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":2434513999},{"type":"v1","addr":"192.168.123.109:6811","nonce":2434513999}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":2434513999},{"type":"v1","addr":"192.168.123.109:6813","nonce":2434513999}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":2434513999},{"type":"v1","addr":"192.168.123.109:6817","nonce":2434513999}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":2434513999},{"type":"v1","addr":"192.168.123.109:6815","nonce":2434513999}]},"public_addr":"192.168.123.109:6811/2434513999","cluster_addr":"192.168.123.109:6813/2434513999","heartbeat_back_addr":"192.168.123.109:6817/2434513999","heartbeat_front_addr":"192.168.123.109:6815/2434513999","state":["exists","up"]},{"osd":2,"uuid":"a2664302-47b2-48a9-ac35-65f3bc5a6c6e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":15,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":1690773368},{"type":"v1","addr":"192.168.123.109:6819","nonce":1690773368}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":1690773368},{"type":"v1","addr":"192.168.123.109:6821","nonce":1690773368}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":1690773368},{"type":"v1","addr":"192.168.123.109:6825","nonce":1690773368}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":1690773368},{"type":"v1","addr":"192.168.123.109:6823","nonce":1690773368}]},"public_addr":"192.168.123.109:6819/1690773368","cluster_addr":"192.168.123.109:6821/1690773368","heartbeat_back_addr":"192.168.123.109:6825/1690773368","heartbeat_front_addr":"192.168.123.109:6823/1690773368","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snap
s_scrub":"2026-03-10T09:13:24.587643+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:13:35.126733+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:13:45.924744+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.109:0/1989739592":"2026-03-11T09:13:04.620297+0000","192.168.123.109:6801/1679320120":"2026-03-11T09:13:04.620297+0000","192.168.123.109:6800/1679320120":"2026-03-11T09:13:04.620297+0000","192.168.123.109:0/1131459195":"2026-03-11T09:13:04.620297+0000","192.168.123.109:6801/2573242556":"2026-03-11T09:12:53.422589+0000","192.168.123.109:6800/2573242556":"2026-03-11T09:12:53.422589+0000","192.168.123.109:0/397778724":"2026-03-11T09:12:53.422589+0000","192.168.123.109:0/3280768865":"2026-03-11T09:13:04.620297+0000","192.168.123.109:0/2622392915":"2026-03-11T09:12:53.422589+0000","192.168.123.109:0/1236928792":"2026-03-11T09:12:53.422589+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T09:13:52.394 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph tell osd.0 flush_pg_stats 2026-03-10T09:13:52.394 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph tell osd.1 flush_pg_stats 2026-03-10T09:13:52.394 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph tell osd.2 flush_pg_stats 2026-03-10T09:13:52.649 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:52.665 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:52.672 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:52 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T09:13:52.672 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:52 vm09 ceph-mon[49644]: osdmap e17: 3 total, 3 up, 3 in 2026-03-10T09:13:52.672 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:52 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T09:13:52.672 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:52 vm09 ceph-mon[49644]: 
from='client.? 192.168.123.109:0/2731276657' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T09:13:52.672 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:52 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2912650903' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T09:13:52.809 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:52.928 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75240]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T09:13:52.928 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75200]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 2026-03-10T09:13:52.928 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75200]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T09:13:52.928 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75200]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T09:13:52.928 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75200]: pam_unix(sudo:session): session closed for user root 2026-03-10T09:13:52.928 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75209]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdd 2026-03-10T09:13:52.928 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75209]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T09:13:52.928 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75209]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T09:13:52.928 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75209]: pam_unix(sudo:session): session closed for user root 2026-03-10T09:13:52.928 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75224]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdc 2026-03-10T09:13:52.928 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75224]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T09:13:52.928 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75224]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T09:13:52.928 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75224]: pam_unix(sudo:session): session closed for user root 2026-03-10T09:13:53.096 INFO:teuthology.orchestra.run.vm09.stdout:64424509442 2026-03-10T09:13:53.096 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph osd last-stat-seq osd.2 2026-03-10T09:13:53.152 INFO:teuthology.orchestra.run.vm09.stdout:34359738375 2026-03-10T09:13:53.152 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph osd last-stat-seq osd.0 2026-03-10T09:13:53.218 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75240]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 
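The "need seq … got …" records in the lines that follow show the flush protocol the harness uses before trusting PG stats: `ceph tell osd.N flush_pg_stats` prints the sequence number of the flushed stats report, and the caller re-reads `ceph osd last-stat-seq osd.N` until the monitor has registered a report at least that new. A minimal standalone sketch of that loop, reusing the cephadm invocation from the DEBUG lines above (the helper name, poll interval, and timeout are assumptions for illustration, not teuthology's actual internals):

```python
import subprocess
import time

# Values taken verbatim from the DEBUG command lines in this log.
CEPHADM = "/home/ubuntu/cephtest/cephadm"
IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
FSID = "349a7c12-1c61-11f1-8c28-6d0db3d11b76"


def ceph(*args: str) -> str:
    """Run one ceph command inside a cephadm shell, as the log does."""
    cmd = ["sudo", CEPHADM, "--image", IMAGE, "shell",
           "--fsid", FSID, "--", "ceph", *args]
    return subprocess.check_output(cmd, text=True).strip()


def wait_for_flush(osd_id: int, timeout: float = 60.0) -> None:
    # flush_pg_stats prints the sequence number of the flushed report.
    need = int(ceph("tell", f"osd.{osd_id}", "flush_pg_stats"))
    deadline = time.time() + timeout
    while True:
        got = int(ceph("osd", "last-stat-seq", f"osd.{osd_id}"))
        print(f"need seq {need} got {got} for osd.{osd_id}")
        if got >= need:  # osd.1 below overshoots: 51539607557 > 51539607556
            return
        if time.time() > deadline:
            raise TimeoutError(f"osd.{osd_id} never reached seq {need}")
        time.sleep(1)
```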
2026-03-10T09:13:53.219 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T09:13:53.219 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:52 vm09 sudo[75240]: pam_unix(sudo:session): session closed for user root 2026-03-10T09:13:53.304 INFO:teuthology.orchestra.run.vm09.stdout:51539607556 2026-03-10T09:13:53.304 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph osd last-stat-seq osd.1 2026-03-10T09:13:53.365 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:53.472 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:53.693 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:53.735 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:53 vm09 ceph-mon[49644]: pgmap v31: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:13:53.735 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:53 vm09 ceph-mon[49644]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T09:13:53.735 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:53 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T09:13:53.735 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:53 vm09 ceph-mon[49644]: osdmap e18: 3 total, 3 up, 3 in 2026-03-10T09:13:53.735 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:53 vm09 ceph-mon[49644]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T09:13:53.735 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:53 vm09 ceph-mon[49644]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T09:13:53.735 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:53 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T09:13:53.740 INFO:teuthology.orchestra.run.vm09.stdout:64424509441 2026-03-10T09:13:53.853 INFO:tasks.cephadm.ceph_manager.ceph:need seq 64424509442 got 64424509441 for osd.2 2026-03-10T09:13:53.867 INFO:teuthology.orchestra.run.vm09.stdout:34359738374 2026-03-10T09:13:53.956 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738375 got 34359738374 for osd.0 2026-03-10T09:13:53.969 INFO:teuthology.orchestra.run.vm09.stdout:51539607555 2026-03-10T09:13:54.018 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607556 got 51539607555 for osd.1 2026-03-10T09:13:54.853 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph osd last-stat-seq osd.2 2026-03-10T09:13:54.957 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph osd last-stat-seq osd.0 2026-03-10T09:13:54.984 
INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:54 vm09 ceph-mon[49644]: osdmap e19: 3 total, 3 up, 3 in 2026-03-10T09:13:54.984 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:54 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/1087153370' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T09:13:54.984 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:54 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/388439006' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T09:13:54.984 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:54 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2092559993' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T09:13:55.019 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph osd last-stat-seq osd.1 2026-03-10T09:13:55.027 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:55.204 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:55.352 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:55.380 INFO:teuthology.orchestra.run.vm09.stdout:64424509442 2026-03-10T09:13:55.463 INFO:tasks.cephadm.ceph_manager.ceph:need seq 64424509442 got 64424509442 for osd.2 2026-03-10T09:13:55.463 DEBUG:teuthology.parallel:result is None 2026-03-10T09:13:55.545 INFO:teuthology.orchestra.run.vm09.stdout:34359738375 2026-03-10T09:13:55.612 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738375 got 34359738375 for osd.0 2026-03-10T09:13:55.613 DEBUG:teuthology.parallel:result is None 2026-03-10T09:13:55.657 INFO:teuthology.orchestra.run.vm09.stdout:51539607557 2026-03-10T09:13:55.706 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607556 got 51539607557 for osd.1 2026-03-10T09:13:55.706 DEBUG:teuthology.parallel:result is None 2026-03-10T09:13:55.706 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T09:13:55.706 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph pg dump --format=json 2026-03-10T09:13:55.876 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:55.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:55 vm09 ceph-mon[49644]: pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:13:55.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:55 vm09 ceph-mon[49644]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-10T09:13:55.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:55 vm09 ceph-mon[49644]: Cluster is now healthy 2026-03-10T09:13:55.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:55 vm09 ceph-mon[49644]: mgrmap e13: a(active, since 50s) 2026-03-10T09:13:55.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:55 vm09 ceph-mon[49644]: from='client.? 
192.168.123.109:0/1102429574' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T09:13:55.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:55 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2133655005' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T09:13:55.890 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:55 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/1662264352' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T09:13:56.097 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:13:56.097 INFO:teuthology.orchestra.run.vm09.stderr:dumped all 2026-03-10T09:13:56.150 INFO:teuthology.orchestra.run.vm09.stdout:{"pg_ready":true,"pg_map":{"version":34,"stamp":"2026-03-10T09:13:54.636513+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":2,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":81872,"kb_used_data":1224,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62820400,"statfs":{"total":64411926528,"available":64328089600,"internally_reserved":0,"allocated":1253376,"data_stored":919847,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4770,"internal_metadata":82373982},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":2,"apply_latency_ms":2,"commit_latency_ns":2000000,"apply_latency_ns":2000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mod
e_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.958499"},"pg_stats":[{"pgid":"1.0","version":"18'32","reported_seq":57,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-10T09:13:53.678654+0000","last_change":"2026-03-10T09:13:52.686865+0000","last_active":"2026-03-10T09:13:53.678654+0000","last_peered":"2026-03-10T09:13:53.678654+0000","last_clean":"2026-03-10T09:13:53.678654+0000","last_became_active":"2026-03-10T09:13:52.686742+0000","last_became_peered":"2026-03-10T09:13:52.686742+0000","last_unstale":"2026-03-10T09:13:53.678654+0000","last_undegraded":"2026-03-10T09:13:53.678654+0000","last_fullsized":"2026-03-10T09:13:53.678654+0000","mapping_epoch":17,"log_start":"0'0","ondisk_log_start":"0'0","created":17,"last_epoch_clean":18,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T09:13:51.665045+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T09:13:51.665045+0000","last_clean_scrub_stamp":"2026-03-10T09:13:51.665045+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:33:24.415268+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,2],"acting":[1,0,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":15,"seq":64424509442,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27316,"kb_used_data":476,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940108,"statfs":{"total":21470642176,"available":21442670592,"internally_reserved":0,"allocated":487424,"data_stored":373512,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]},{"osd":1,"up_from":12,"seq":51539607557,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27568,"kb_used_data":604,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939856,"statfs":{"total":21470642176,"available":21442412544,"internally_reserved":0,"allocated":618496,"data_stored":504600,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocat
ed":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738375,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":26988,"kb_used_data":144,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940436,"statfs":{"total":21470642176,"available":21443006464,"internally_reserved":0,"allocated":147456,"data_stored":41735,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T09:13:56.151 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph pg dump --format=json 2026-03-10T09:13:56.314 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:56.537 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:13:56.537 INFO:teuthology.orchestra.run.vm09.stderr:dumped all 2026-03-10T09:13:56.588 
INFO:teuthology.orchestra.run.vm09.stdout:{"pg_ready":true,"pg_map":{"version":34,"stamp":"2026-03-10T09:13:54.636513+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":2,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":81872,"kb_used_data":1224,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62820400,"statfs":{"total":64411926528,"available":64328089600,"internally_reserved":0,"allocated":1253376,"data_stored":919847,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4770,"internal_metadata":82373982},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":2,"apply_latency_ms":2,"commit_latency_ns":2000000,"apply_latency_ns":2000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.958499"},"pg_stats":[{"pgid":"1.0","version":"18'32","reported_seq":57,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-10T09:13:53.678654+0000","last_change":"2026-03-10T09:13:52.686865+0000","last_active":"2026-
03-10T09:13:53.678654+0000","last_peered":"2026-03-10T09:13:53.678654+0000","last_clean":"2026-03-10T09:13:53.678654+0000","last_became_active":"2026-03-10T09:13:52.686742+0000","last_became_peered":"2026-03-10T09:13:52.686742+0000","last_unstale":"2026-03-10T09:13:53.678654+0000","last_undegraded":"2026-03-10T09:13:53.678654+0000","last_fullsized":"2026-03-10T09:13:53.678654+0000","mapping_epoch":17,"log_start":"0'0","ondisk_log_start":"0'0","created":17,"last_epoch_clean":18,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T09:13:51.665045+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T09:13:51.665045+0000","last_clean_scrub_stamp":"2026-03-10T09:13:51.665045+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:33:24.415268+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,2],"acting":[1,0,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":15,"seq":64424509442,"num_pgs":0,"num_osds":1,"num_per_p
ool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27316,"kb_used_data":476,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940108,"statfs":{"total":21470642176,"available":21442670592,"internally_reserved":0,"allocated":487424,"data_stored":373512,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]},{"osd":1,"up_from":12,"seq":51539607557,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27568,"kb_used_data":604,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939856,"statfs":{"total":21470642176,"available":21442412544,"internally_reserved":0,"allocated":618496,"data_stored":504600,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738375,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":26988,"kb_used_data":144,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940436,"statfs":{"total":21470642176,"available":21443006464,"internally_reserved":0,"allocated":147456,"data_stored":41735,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T09:13:56.589 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T09:13:56.589 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
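The "clean!" verdict above comes from parsing the `ceph pg dump --format=json` output: the wait loop is satisfied once every entry in `pg_map.pg_stats` reports an `active+clean` state, and the dump above shows the single PG `1.0` in exactly that state. A hedged sketch of that predicate; the real check in teuthology's ceph manager is more elaborate, this is only the shape of it:

```python
import json


def is_clean(pg_dump_json: str) -> bool:
    """True once every PG in a `ceph pg dump --format=json` output is
    active+clean, mirroring the 'waiting for clean' loop in this log."""
    pg_stats = json.loads(pg_dump_json)["pg_map"]["pg_stats"]
    return bool(pg_stats) and all(
        pg["state"] == "active+clean" for pg in pg_stats
    )

# e.g. is_clean(dump) is True for the dump printed above, whose only PG
# reports "state":"active+clean".
```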
2026-03-10T09:13:56.589 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T09:13:56.589 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph health --format=json 2026-03-10T09:13:56.688 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:56 vm09 ceph-mon[49644]: from='client.14250 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T09:13:56.764 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:56.998 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T09:13:56.999 INFO:teuthology.orchestra.run.vm09.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T09:13:57.068 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T09:13:57.068 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T09:13:57.068 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-10T09:13:57.070 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm09.local 2026-03-10T09:13:57.070 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- bash -c 'ceph osd pool create foo' 2026-03-10T09:13:57.237 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:57.690 INFO:teuthology.orchestra.run.vm09.stderr:pool 'foo' created 2026-03-10T09:13:57.751 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- bash -c 'rbd pool init foo' 2026-03-10T09:13:57.919 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:13:57.941 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:57 vm09 ceph-mon[49644]: from='client.14252 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T09:13:57.941 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:57 vm09 ceph-mon[49644]: pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:13:57.941 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:57 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2232949430' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T09:13:57.941 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:57 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/3184621767' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-10T09:13:59.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:58 vm09 ceph-mon[49644]: from='client.? 
192.168.123.109:0/3184621767' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished 2026-03-10T09:13:59.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:58 vm09 ceph-mon[49644]: osdmap e20: 3 total, 3 up, 3 in 2026-03-10T09:13:59.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:58 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2656891967' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-10T09:13:59.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:58 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2656891967' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished 2026-03-10T09:13:59.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:58 vm09 ceph-mon[49644]: osdmap e21: 3 total, 3 up, 3 in 2026-03-10T09:14:00.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:59 vm09 ceph-mon[49644]: pgmap v37: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:14:00.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:13:59 vm09 ceph-mon[49644]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T09:14:01.003 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- bash -c 'ceph orch apply iscsi foo u p' 2026-03-10T09:14:01.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:00 vm09 ceph-mon[49644]: osdmap e22: 3 total, 3 up, 3 in 2026-03-10T09:14:01.186 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config 2026-03-10T09:14:01.526 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled iscsi.foo update... 2026-03-10T09:14:01.645 INFO:teuthology.run_tasks:Running task workunit... 2026-03-10T09:14:01.649 INFO:tasks.workunit:Pulling workunits from ref 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T09:14:01.649 INFO:tasks.workunit:Making a separate scratch dir for every client... 
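The `cephadm.shell` task that just completed above replayed the three commands from the job YAML (`ceph osd pool create foo`, `rbd pool init foo`, `ceph orch apply iscsi foo u p`), each wrapped exactly as the DEBUG lines show: `cephadm shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid <fsid> -- bash -c '<command>'`. A rough, self-contained reconstruction of that wrapper (an illustration of the command shape only, not teuthology's implementation):

```python
import subprocess

CEPHADM = "/home/ubuntu/cephtest/cephadm"
IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
FSID = "349a7c12-1c61-11f1-8c28-6d0db3d11b76"


def cephadm_shell(command: str) -> None:
    # One task step: run the command in a cephadm shell container with the
    # admin conf and keyring mounted, as in the DEBUG lines above.
    subprocess.check_call([
        "sudo", CEPHADM, "--image", IMAGE, "shell",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "--fsid", FSID,
        "--", "bash", "-c", command,
    ])


for step in ("ceph osd pool create foo",
             "rbd pool init foo",
             "ceph orch apply iscsi foo u p"):
    cephadm_shell(step)
```

In `ceph orch apply iscsi foo u p` the positional arguments are the backing pool and the gateway API user/password, which is why the mon log further down records `"pool": "foo", "api_user": "u", "api_password": "p"` in the dispatched `orch apply iscsi` command.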
2026-03-10T09:14:01.649 DEBUG:teuthology.orchestra.run.vm09:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-10T09:14:01.668 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T09:14:01.669 INFO:teuthology.orchestra.run.vm09.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-10T09:14:01.669 DEBUG:teuthology.orchestra.run.vm09:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-10T09:14:01.727 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-10T09:14:01.727 DEBUG:teuthology.orchestra.run.vm09:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-10T09:14:01.809 INFO:tasks.workunit:timeout=3h 2026-03-10T09:14:01.809 INFO:tasks.workunit:cleanup=True 2026-03-10T09:14:01.809 DEBUG:teuthology.orchestra.run.vm09:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T09:14:01.810 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:01 vm09 ceph-mon[49644]: pgmap v40: 33 pgs: 21 active+clean, 12 unknown; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:14:01.811 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:01 vm09 ceph-mon[49644]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-10T09:14:01.811 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:01 vm09 ceph-mon[49644]: Cluster is now healthy 2026-03-10T09:14:01.811 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:01 vm09 ceph-mon[49644]: osdmap e23: 3 total, 3 up, 3 in 2026-03-10T09:14:01.811 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:01 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:14:01.811 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:01 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:14:01.811 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:01 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:14:01.811 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:01 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:14:01.811 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:01 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:14:01.811 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:01 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm09.zsyrqw", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T09:14:01.811 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:01 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm09.zsyrqw", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': 
finished 2026-03-10T09:14:01.811 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:01 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:14:01.833 INFO:tasks.workunit.client.0.vm09.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-10T09:14:02.917 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:02 vm09 ceph-mon[49644]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "foo", "api_user": "u", "api_password": "p", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:14:02.918 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:02 vm09 ceph-mon[49644]: Saving service iscsi.foo spec with placement count:1 2026-03-10T09:14:02.918 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:02 vm09 ceph-mon[49644]: Deploying daemon iscsi.foo.vm09.zsyrqw on vm09 2026-03-10T09:14:02.918 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:02 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:14:02.918 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:02 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:14:02.918 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:02 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:14:02.918 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:02 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' 2026-03-10T09:14:02.918 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:02 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: Checking pool "foo" exists for service iscsi.foo 2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: pgmap v42: 33 pgs: 33 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2597097526' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='client.? 
192.168.123.109:0/3692951226' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:0/1989739592"}]: dispatch
2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a'
2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a'
2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a'
2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a'
2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm09"}]: dispatch
2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a'
2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a'
2026-03-10T09:14:04.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:03 vm09 ceph-mon[49644]: mgrmap e14: a(active, since 58s)
2026-03-10T09:14:05.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:04 vm09 ceph-mon[49644]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T09:14:05.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:04 vm09 ceph-mon[49644]: Adding iSCSI gateway http://:@192.168.123.109:5000 to Dashboard
2026-03-10T09:14:05.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:04 vm09 ceph-mon[49644]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T09:14:05.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:04 vm09 ceph-mon[49644]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm09"}]: dispatch
2026-03-10T09:14:05.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:04 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/3692951226' entity='client.iscsi.foo.vm09.zsyrqw' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:0/1989739592"}]': finished
2026-03-10T09:14:05.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:04 vm09 ceph-mon[49644]: osdmap e24: 3 total, 3 up, 3 in
2026-03-10T09:14:05.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:04 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/3292001689' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:6801/1679320120"}]: dispatch
2026-03-10T09:14:05.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:04 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a'
2026-03-10T09:14:06.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:05 vm09 ceph-mon[49644]: pgmap v44: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T09:14:06.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:05 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/3292001689' entity='client.iscsi.foo.vm09.zsyrqw' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:6801/1679320120"}]': finished
2026-03-10T09:14:06.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:05 vm09 ceph-mon[49644]: osdmap e25: 3 total, 3 up, 3 in
2026-03-10T09:14:06.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:05 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2009951555' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:6800/1679320120"}]: dispatch
2026-03-10T09:14:07.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:06 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2009951555' entity='client.iscsi.foo.vm09.zsyrqw' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:6800/1679320120"}]': finished
2026-03-10T09:14:07.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:06 vm09 ceph-mon[49644]: osdmap e26: 3 total, 3 up, 3 in
2026-03-10T09:14:07.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:06 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/538222910' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:0/1131459195"}]: dispatch
2026-03-10T09:14:08.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:07 vm09 ceph-mon[49644]: pgmap v47: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 351 B/s rd, 527 B/s wr, 2 op/s
2026-03-10T09:14:08.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:07 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/538222910' entity='client.iscsi.foo.vm09.zsyrqw' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:0/1131459195"}]': finished
2026-03-10T09:14:08.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:07 vm09 ceph-mon[49644]: osdmap e27: 3 total, 3 up, 3 in
2026-03-10T09:14:08.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:07 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2783173927' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:6801/2573242556"}]: dispatch
2026-03-10T09:14:09.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:08 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/2783173927' entity='client.iscsi.foo.vm09.zsyrqw' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:6801/2573242556"}]': finished
2026-03-10T09:14:09.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:08 vm09 ceph-mon[49644]: osdmap e28: 3 total, 3 up, 3 in
2026-03-10T09:14:09.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:08 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/232903364' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:6800/2573242556"}]: dispatch
2026-03-10T09:14:09.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:08 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/232903364' entity='client.iscsi.foo.vm09.zsyrqw' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:6800/2573242556"}]': finished
2026-03-10T09:14:09.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:08 vm09 ceph-mon[49644]: osdmap e29: 3 total, 3 up, 3 in
2026-03-10T09:14:09.389 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:08 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/1541763827' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:0/397778724"}]: dispatch
2026-03-10T09:14:10.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:09 vm09 ceph-mon[49644]: pgmap v51: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T09:14:10.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:09 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/1541763827' entity='client.iscsi.foo.vm09.zsyrqw' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:0/397778724"}]': finished
2026-03-10T09:14:10.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:09 vm09 ceph-mon[49644]: osdmap e30: 3 total, 3 up, 3 in
2026-03-10T09:14:10.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:09 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/3414389414' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:0/3280768865"}]: dispatch
2026-03-10T09:14:12.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:11 vm09 ceph-mon[49644]: pgmap v53: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T09:14:12.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:11 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/3414389414' entity='client.iscsi.foo.vm09.zsyrqw' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:0/3280768865"}]': finished
2026-03-10T09:14:12.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:11 vm09 ceph-mon[49644]: osdmap e31: 3 total, 3 up, 3 in
2026-03-10T09:14:12.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:11 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/289602090' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:0/2622392915"}]: dispatch
2026-03-10T09:14:13.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:12 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/289602090' entity='client.iscsi.foo.vm09.zsyrqw' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:0/2622392915"}]': finished
2026-03-10T09:14:13.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:12 vm09 ceph-mon[49644]: osdmap e32: 3 total, 3 up, 3 in
2026-03-10T09:14:13.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:12 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/1735212095' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:0/1236928792"}]: dispatch
2026-03-10T09:14:14.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:13 vm09 ceph-mon[49644]: pgmap v56: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T09:14:14.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:13 vm09 ceph-mon[49644]: from='client.? 192.168.123.109:0/1735212095' entity='client.iscsi.foo.vm09.zsyrqw' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.109:0/1236928792"}]': finished
2026-03-10T09:14:14.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:13 vm09 ceph-mon[49644]: osdmap e33: 3 total, 3 up, 3 in
2026-03-10T09:14:14.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:13 vm09 ceph-mon[49644]: from='client.14267 -' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T09:14:16.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:15 vm09 ceph-mon[49644]: pgmap v58: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T09:14:18.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:17 vm09 ceph-mon[49644]: pgmap v59: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T09:14:20.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:19 vm09 ceph-mon[49644]: pgmap v60: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 641 B/s rd, 0 op/s
2026-03-10T09:14:22.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:21 vm09 ceph-mon[49644]: pgmap v61: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T09:14:24.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:23 vm09 ceph-mon[49644]: pgmap v62: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T09:14:24.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:23 vm09 ceph-mon[49644]: from='client.14267 -' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T09:14:26.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:25 vm09 ceph-mon[49644]: pgmap v63: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 855 B/s rd, 0 op/s
2026-03-10T09:14:28.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:27 vm09 ceph-mon[49644]: pgmap v64: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T09:14:30.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:29 vm09 ceph-mon[49644]: pgmap v65: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T09:14:32.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:31 vm09 ceph-mon[49644]: pgmap v66: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T09:14:34.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:33 vm09 ceph-mon[49644]: pgmap v67: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T09:14:34.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:33 vm09 ceph-mon[49644]: from='client.14267 -' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T09:14:36.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:35 vm09 ceph-mon[49644]: pgmap v68: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T09:14:38.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:37 vm09 ceph-mon[49644]: pgmap v69: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T09:14:40.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:39 vm09 ceph-mon[49644]: pgmap v70: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T09:14:42.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:41 vm09 ceph-mon[49644]: pgmap v71: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T09:14:44.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:43 vm09 ceph-mon[49644]: pgmap v72: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T09:14:44.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:43 vm09 ceph-mon[49644]: from='client.14267 -' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T09:14:46.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:46 vm09 ceph-mon[49644]: pgmap v73: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T09:14:47.889 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:47 vm09 ceph-mon[49644]: pgmap v74: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T09:14:50.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:49 vm09 ceph-mon[49644]: pgmap v75: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T09:14:50.506 INFO:tasks.workunit.client.0.vm09.stderr:Note: switching to '75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b'.
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:You are in 'detached HEAD' state. You can look around, make experimental
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:changes and commit them, and you can discard any commits you make in this
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:state without impacting any branches by switching back to a branch.
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:If you want to create a new branch to retain commits you create, you may
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:do so (now or later) by using -c with the switch command. Example:
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:  git switch -c <new-branch-name>
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:Or undo this operation with:
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:  git switch -
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:Turn off this advice by setting config variable advice.detachedHead to false
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:
2026-03-10T09:14:50.507 INFO:tasks.workunit.client.0.vm09.stderr:HEAD is now at 75a68fd8ca3 qa/suites/orch/cephadm/osds: drop nvme_loop task
2026-03-10T09:14:50.512 DEBUG:teuthology.orchestra.run.vm09:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0
2026-03-10T09:14:50.569 INFO:tasks.workunit.client.0.vm09.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done
2026-03-10T09:14:50.571 INFO:tasks.workunit.client.0.vm09.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io'
2026-03-10T09:14:50.571 INFO:tasks.workunit.client.0.vm09.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test
2026-03-10T09:14:50.622 INFO:tasks.workunit.client.0.vm09.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io
2026-03-10T09:14:50.659 INFO:tasks.workunit.client.0.vm09.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read
2026-03-10T09:14:50.693 INFO:tasks.workunit.client.0.vm09.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io'
2026-03-10T09:14:50.695 INFO:tasks.workunit.client.0.vm09.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs'
2026-03-10T09:14:50.695 INFO:tasks.workunit.client.0.vm09.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc
2026-03-10T09:14:50.731 INFO:tasks.workunit.client.0.vm09.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs'
2026-03-10T09:14:50.734 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T09:14:50.734 DEBUG:teuthology.orchestra.run.vm09:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout
2026-03-10T09:14:50.791 INFO:tasks.workunit:Running workunits matching cephadm/test_iscsi_pids_limit.sh on client.0...
2026-03-10T09:14:50.792 INFO:tasks.workunit:Running workunit cephadm/test_iscsi_pids_limit.sh...
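The workunit task pins its qa clone to the suite sha1, which is why git prints the detached-HEAD notice above, and then enumerates the executable workunit scripts with make and find. A minimal sketch of that step, assuming a scratch clone path (CLONE and the output file below are illustrative placeholders, not the task's real paths):

    # Sketch: pin a clone to a fixed commit and list executable workunits,
    # mirroring the commands visible in the log. Paths are placeholders.
    set -ex
    CLONE=/tmp/clone.client.0
    git clone https://github.com/kshtsk/ceph.git "$CLONE"
    git -C "$CLONE" checkout 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b  # detached HEAD, as in the log
    cd "$CLONE/qa/workunits"
    if test -e Makefile ; then make ; fi
    find -executable -type f -printf '%P\0' > /tmp/workunits.list      # NUL-separated list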
2026-03-10T09:14:50.792 DEBUG:teuthology.orchestra.run.vm09:workunit test cephadm/test_iscsi_pids_limit.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh
2026-03-10T09:14:50.854 INFO:tasks.workunit.client.0.vm09.stderr:++ sudo podman ps -qa --filter=name=iscsi
2026-03-10T09:14:50.895 INFO:tasks.workunit.client.0.vm09.stderr:+ ISCSI_CONT_IDS='5c44fdcc3013
2026-03-10T09:14:50.895 INFO:tasks.workunit.client.0.vm09.stderr:c70f6c669aaf'
2026-03-10T09:14:50.895 INFO:tasks.workunit.client.0.vm09.stderr:++ echo 5c44fdcc3013 c70f6c669aaf
2026-03-10T09:14:50.895 INFO:tasks.workunit.client.0.vm09.stderr:++ wc -w
2026-03-10T09:14:50.897 INFO:tasks.workunit.client.0.vm09.stderr:+ CONT_COUNT=2
2026-03-10T09:14:50.897 INFO:tasks.workunit.client.0.vm09.stderr:+ test 2 -eq 2
2026-03-10T09:14:50.898 INFO:tasks.workunit.client.0.vm09.stderr:+ for i in ${ISCSI_CONT_IDS}
2026-03-10T09:14:50.898 INFO:tasks.workunit.client.0.vm09.stderr:++ sudo podman exec 5c44fdcc3013 cat /sys/fs/cgroup/pids/pids.max
2026-03-10T09:14:50.944 INFO:tasks.workunit.client.0.vm09.stderr:cat: /sys/fs/cgroup/pids/pids.max: No such file or directory
2026-03-10T09:14:51.001 INFO:tasks.workunit.client.0.vm09.stderr:+ '[' ']'
2026-03-10T09:14:51.001 INFO:tasks.workunit.client.0.vm09.stderr:++ sudo podman exec 5c44fdcc3013 cat /sys/fs/cgroup/pids.max
2026-03-10T09:14:51.102 INFO:tasks.workunit.client.0.vm09.stderr:+ '[' max ']'
2026-03-10T09:14:51.102 INFO:tasks.workunit.client.0.vm09.stderr:++ sudo podman exec 5c44fdcc3013 cat /sys/fs/cgroup/pids.max
2026-03-10T09:14:51.197 INFO:tasks.workunit.client.0.vm09.stderr:+ pid_limit=max
2026-03-10T09:14:51.197 INFO:tasks.workunit.client.0.vm09.stderr:+ test max == max
2026-03-10T09:14:51.197 INFO:tasks.workunit.client.0.vm09.stderr:+ for i in ${ISCSI_CONT_IDS}
2026-03-10T09:14:51.197 INFO:tasks.workunit.client.0.vm09.stderr:++ sudo podman exec c70f6c669aaf cat /sys/fs/cgroup/pids/pids.max
2026-03-10T09:14:51.240 INFO:tasks.workunit.client.0.vm09.stderr:cat: /sys/fs/cgroup/pids/pids.max: No such file or directory
2026-03-10T09:14:51.297 INFO:tasks.workunit.client.0.vm09.stderr:+ '[' ']'
2026-03-10T09:14:51.297 INFO:tasks.workunit.client.0.vm09.stderr:++ sudo podman exec c70f6c669aaf cat /sys/fs/cgroup/pids.max
2026-03-10T09:14:51.395 INFO:tasks.workunit.client.0.vm09.stderr:+ '[' max ']'
2026-03-10T09:14:51.395 INFO:tasks.workunit.client.0.vm09.stderr:++ sudo podman exec c70f6c669aaf cat /sys/fs/cgroup/pids.max
2026-03-10T09:14:51.491 INFO:tasks.workunit.client.0.vm09.stderr:+ pid_limit=max
2026-03-10T09:14:51.491 INFO:tasks.workunit.client.0.vm09.stderr:+ test max == max
2026-03-10T09:14:51.491 INFO:tasks.workunit.client.0.vm09.stderr:+ for i in ${ISCSI_CONT_IDS}
2026-03-10T09:14:51.491 INFO:tasks.workunit.client.0.vm09.stderr:+ sudo podman exec 5c44fdcc3013 /bin/sh -c 'for j in {0..20000}; do sleep 300 & done'
2026-03-10T09:14:52.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:51 vm09 ceph-mon[49644]: pgmap v76: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T09:14:54.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:53 vm09 ceph-mon[49644]: pgmap v77: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T09:14:54.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:53 vm09 ceph-mon[49644]: from='client.14267 -' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T09:14:56.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:55 vm09 ceph-mon[49644]: pgmap v78: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T09:14:58.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:57 vm09 ceph-mon[49644]: pgmap v79: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T09:15:00.139 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:14:59 vm09 ceph-mon[49644]: pgmap v80: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T09:15:02.606 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:02 vm09 ceph-mon[49644]: pgmap v81: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T09:15:02.619 INFO:tasks.workunit.client.0.vm09.stderr:+ for i in ${ISCSI_CONT_IDS}
2026-03-10T09:15:02.619 INFO:tasks.workunit.client.0.vm09.stderr:+ sudo podman exec c70f6c669aaf /bin/sh -c 'for j in {0..20000}; do sleep 300 & done'
2026-03-10T09:15:04.742 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:04 vm09 ceph-mon[49644]: pgmap v82: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T09:15:04.742 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:04 vm09 ceph-mon[49644]: from='client.14267 -' entity='client.iscsi.foo.vm09.zsyrqw' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T09:15:04.742 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:04 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:15:04.742 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:04 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:15:04.742 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:04 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:15:04.742 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:04 vm09 ceph-mon[49644]: from='mgr.14150 192.168.123.109:0/615298175' entity='mgr.a'
2026-03-10T09:15:05.833 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:05 vm09 ceph-mon[49644]: pgmap v83: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 844 B/s rd, 0 op/s
2026-03-10T09:15:25.913 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:15:24 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a.service: A process of this unit has been killed by the OOM killer.
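The trace above shows test_iscsi_pids_limit.sh probing each iscsi container's PID limit at the cgroup v1 path first and, when that file is missing (the "No such file or directory" lines), falling back to the unified cgroup v2 path before forking ~20000 background sleeps to exercise the limit. A minimal re-creation of that probe logic under those assumptions; CID is an illustrative container ID, not a claim about the script's exact variable names:

    # Sketch of the pids.max probe seen in the trace: try the cgroup v1
    # location, fall back to the unified (v2) hierarchy. CID is a placeholder.
    CID=5c44fdcc3013
    pid_limit=$(sudo podman exec "$CID" cat /sys/fs/cgroup/pids/pids.max 2>/dev/null)
    if [ ! "$pid_limit" ]; then
        # cgroup v2: the pids controller file sits at the cgroup root
        pid_limit=$(sudo podman exec "$CID" cat /sys/fs/cgroup/pids.max)
    fi
    test "$pid_limit" == max   # the test expects the limit to be lifted to 'max'

With pids.max at 'max', the 20000 forked sleeps are all allowed to start, which appears to be what drives the memory pressure seen immediately below.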
2026-03-10T09:15:26.364 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:26 vm09 ceph-mon[49644]: pgmap v84: 33 pgs: 33 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T09:15:26.639 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:15:26 vm09 podman[100764]: 2026-03-10 09:15:26.366303933 +0000 UTC m=+0.063169755 container died cb91c93e989df3422c96515fccc5c1f307598fa2c1fa65e212222edd13e99a32 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
2026-03-10T09:15:26.639 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:15:26 vm09 podman[100764]: 2026-03-10 09:15:26.491962087 +0000 UTC m=+0.188827909 container remove cb91c93e989df3422c96515fccc5c1f307598fa2c1fa65e212222edd13e99a32 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
2026-03-10T09:15:26.639 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:15:26 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a.service: Main process exited, code=exited, status=137/n/a
2026-03-10T09:15:27.140 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:15:26 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a.service: Failed with result 'exit-code'.
2026-03-10T09:15:27.140 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:15:26 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a.service: Consumed 16.186s CPU time.
2026-03-10T09:15:33.166 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:15:32 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.2.service: A process of this unit has been killed by the OOM killer.
2026-03-10T09:15:33.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:33 vm09 ceph-mon[49644]: osd.2 reported immediately failed by osd.0
2026-03-10T09:15:33.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:33 vm09 ceph-mon[49644]: osd.2 failed (root=default,host=vm09) (connection refused reported by osd.0)
2026-03-10T09:15:33.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:33 vm09 ceph-mon[49644]: osd.2 reported immediately failed by osd.1
2026-03-10T09:15:33.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:33 vm09 ceph-mon[49644]: osd.2 reported immediately failed by osd.1
2026-03-10T09:15:33.642 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:33 vm09 ceph-mon[49644]: osd.2 reported immediately failed by osd.0
2026-03-10T09:15:34.650 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:34 vm09 ceph-mon[49644]: Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T09:15:34.650 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:34 vm09 ceph-mon[49644]: osdmap e34: 3 total, 2 up, 3 in
2026-03-10T09:15:37.311 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:15:36 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a.service: Scheduled restart job, restart counter is at 1.
2026-03-10T09:15:37.311 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:15:37 vm09 systemd[1]: Stopped Ceph mgr.a for 349a7c12-1c61-11f1-8c28-6d0db3d11b76.
2026-03-10T09:15:37.311 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:15:37 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a.service: Consumed 16.186s CPU time.
2026-03-10T09:15:37.311 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:15:37 vm09 systemd[1]: Starting Ceph mgr.a for 349a7c12-1c61-11f1-8c28-6d0db3d11b76...
2026-03-10T09:15:50.146 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:15:47 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.0.service: A process of this unit has been killed by the OOM killer.
2026-03-10T09:15:50.146 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:15:49 vm09 podman[102454]: 2026-03-10 09:15:49.81098797 +0000 UTC m=+15.778289684 container died b48612f15eee872815ce59e7c5723e15752939d9a53a511dd56bca707cefa1fa (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-10T09:15:52.148 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:15:51 vm09 ceph-mon[49644]: osdmap e35: 3 total, 2 up, 3 in
2026-03-10T09:15:52.148 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:15:51 vm09 podman[102454]: 2026-03-10 09:15:51.398228003 +0000 UTC m=+17.365529498 container remove b48612f15eee872815ce59e7c5723e15752939d9a53a511dd56bca707cefa1fa (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3)
2026-03-10T09:15:52.148 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:15:51 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.2.service: Main process exited, code=exited, status=137/n/a
2026-03-10T09:16:04.896 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:03 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.1.service: A process of this unit has been killed by the OOM killer.
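mgr.a, osd.2 and osd.0 have now all been killed by the kernel OOM killer, and systemd reports each exit as status=137, i.e. 128 + 9 (SIGKILL). One way to confirm such kills from the node, assuming intact kernel and unit journals (the unit name is copied from the log; everything else is standard tooling, not something the test itself runs):

    # Sketch: confirm the OOM kills behind the status=137 exits above.
    sudo journalctl -k | grep -i -e 'out of memory' -e 'oom'   # kernel OOM records
    sudo systemctl show ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.2.service \
        -p Result -p ExecMainStatus   # here the log shows Result=exit-code, ExecMainStatus=137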
2026-03-10T09:16:04.896 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:04 vm09 podman[102573]: 2026-03-10 09:16:04.553599339 +0000 UTC m=+12.322984873 container died 08d522fc9f7def72658b6bfcdf5164746a9abb1f0545fe7e0e957574ed02be52 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid)
2026-03-10T09:16:18.947 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:18 vm09 podman[102752]: 2026-03-10 09:16:18.788088378 +0000 UTC m=+11.845936094 container died 3927384a3f90d9d035b1f64e3a8f41912e612399703297f3484a9082add4945f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-10T09:16:18.947 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:18 vm09 podman[102752]: 2026-03-10 09:16:18.827590826 +0000 UTC m=+11.885438532 container remove 3927384a3f90d9d035b1f64e3a8f41912e612399703297f3484a9082add4945f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
2026-03-10T09:16:18.947 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:18 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.1.service: Main process exited, code=exited, status=137/n/a
2026-03-10T09:16:18.947 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:18 vm09 podman[102617]: 2026-03-10 09:16:18.913883578 +0000 UTC m=+18.377382965 container remove 08d522fc9f7def72658b6bfcdf5164746a9abb1f0545fe7e0e957574ed02be52 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-10T09:16:19.483 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:18 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.0.service: Main process exited, code=exited, status=137/n/a
2026-03-10T09:16:19.483 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:19 vm09 ceph-mon[49644]: Manager daemon a is unresponsive. No standby daemons available.
2026-03-10T09:16:19.483 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:19 vm09 ceph-mon[49644]: osdmap e36: 3 total, 2 up, 3 in
2026-03-10T09:16:19.483 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:19 vm09 ceph-mon[49644]: mgrmap e15: no daemons active (since 12s)
2026-03-10T09:16:36.608 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:29 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mon.a.service: A process of this unit has been killed by the OOM killer.
2026-03-10T09:16:36.609 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:31 vm09 podman[102854]: 2026-03-10 09:16:31.312153078 +0000 UTC m=+12.380668265 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:16:36.609 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:36 vm09 podman[102854]: 2026-03-10 09:16:36.373213892 +0000 UTC m=+17.441729069 container create 10ce0f257fc871b44a453a3c39531564dbdc566e4c7b47d5565d110ae6996b4e (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T09:16:36.609 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:36 vm09 podman[102977]: 2026-03-10 09:16:36.064292819 +0000 UTC m=+17.018752936 container died 098843f55167c7e172389a65638e216bab6e90de7771a2eba638f118cbc10698 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-10T09:16:36.609 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:36 vm09 podman[102977]: 2026-03-10 09:16:36.362100632 +0000 UTC m=+17.316560749 container died 5c44fdcc30135691d3bea7b0bd85ac4ec60ddf329f30e651fabbf7627064fc97 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-iscsi-foo-vm09-zsyrqw, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0)
2026-03-10T09:16:36.609 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:34 vm09 podman[105658]: 2026-03-10 09:16:34.684261877 +0000 UTC m=+0.085097483 container create e4e59fb3d8d1a2a96ace408bb375d137f55860b873f8a2b05fd6c87dcc20f485 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
2026-03-10T09:16:36.609 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:34 vm09 podman[105658]: 2026-03-10 09:16:34.634737231 +0000 UTC m=+0.035572837 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T09:16:36.609 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:36 vm09 podman[105658]: 2026-03-10 09:16:36.422837312 +0000 UTC m=+1.823672918 container init e4e59fb3d8d1a2a96ace408bb375d137f55860b873f8a2b05fd6c87dcc20f485 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-deactivate, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T09:16:36.609 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:36 vm09 podman[105658]: 2026-03-10 09:16:36.455260957 +0000 UTC m=+1.856096563 container start e4e59fb3d8d1a2a96ace408bb375d137f55860b873f8a2b05fd6c87dcc20f485 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
2026-03-10T09:16:36.609 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:36 vm09 podman[105658]: 2026-03-10 09:16:36.458868406 +0000 UTC m=+1.859704012 container attach e4e59fb3d8d1a2a96ace408bb375d137f55860b873f8a2b05fd6c87dcc20f485 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
2026-03-10T09:16:36.855 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:36 vm09 podman[105437]: 2026-03-10 09:16:36.652087116 +0000 UTC m=+2.319523196 container remove 098843f55167c7e172389a65638e216bab6e90de7771a2eba638f118cbc10698 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-10T09:16:36.856 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:36 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mon.a.service: Main process exited, code=exited, status=137/n/a
2026-03-10T09:16:36.856 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:36 vm09 podman[102854]: 2026-03-10 09:16:36.810986374 +0000 UTC m=+17.879501561 container init 10ce0f257fc871b44a453a3c39531564dbdc566e4c7b47d5565d110ae6996b4e (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a, org.opencontainers.image.authors=Ceph Release Team , ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
2026-03-10T09:16:37.391 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:37 vm09 conmon[105745]: conmon e4e59fb3d8d1a2a96ace : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4e59fb3d8d1a2a96ace408bb375d137f55860b873f8a2b05fd6c87dcc20f485.scope/container/memory.events
2026-03-10T09:16:37.392 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:37 vm09 podman[105658]: 2026-03-10 09:16:37.01027502 +0000 UTC m=+2.411110626 container died e4e59fb3d8d1a2a96ace408bb375d137f55860b873f8a2b05fd6c87dcc20f485 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
2026-03-10T09:16:37.392 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:36 vm09 podman[102854]: 2026-03-10 09:16:36.896830732 +0000 UTC m=+17.965345897 container start 10ce0f257fc871b44a453a3c39531564dbdc566e4c7b47d5565d110ae6996b4e (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
2026-03-10T09:16:37.392 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:36 vm09 bash[102854]: 10ce0f257fc871b44a453a3c39531564dbdc566e4c7b47d5565d110ae6996b4e
2026-03-10T09:16:37.392 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:37 vm09 systemd[1]: Started Ceph mgr.a for 349a7c12-1c61-11f1-8c28-6d0db3d11b76.
2026-03-10T09:16:37.793 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:37 vm09 podman[105658]: 2026-03-10 09:16:37.607560552 +0000 UTC m=+3.008396158 container remove e4e59fb3d8d1a2a96ace408bb375d137f55860b873f8a2b05fd6c87dcc20f485 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team )
2026-03-10T09:16:38.240 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:37 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.2.service: Failed with result 'exit-code'.
2026-03-10T09:16:38.241 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:37 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.2.service: Consumed 5.108s CPU time.
2026-03-10T09:16:38.639 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:38 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mon.a.service: Failed with result 'exit-code'.
2026-03-10T09:16:38.639 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:38 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mon.a.service: Consumed 5.464s CPU time.
2026-03-10T09:16:39.045 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:38 vm09 podman[107009]: 2026-03-10 09:16:38.54746223 +0000 UTC m=+0.208394161 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T09:16:39.045 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:38 vm09 podman[107009]: 2026-03-10 09:16:38.712175156 +0000 UTC m=+0.373107087 container create 5197c6c086ccf41f6aa9cbf8aaa10755232ca513996dfe9a20cbf8246efd9955 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T09:16:39.045 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:38 vm09 podman[107009]: 2026-03-10 09:16:38.821950801 +0000 UTC m=+0.482882733 container init 5197c6c086ccf41f6aa9cbf8aaa10755232ca513996dfe9a20cbf8246efd9955 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-deactivate, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T09:16:39.045 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:38 vm09 podman[107009]: 2026-03-10 09:16:38.832832087 +0000 UTC m=+0.493764018 container start 5197c6c086ccf41f6aa9cbf8aaa10755232ca513996dfe9a20cbf8246efd9955 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-deactivate, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True) 2026-03-10T09:16:39.045 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:38 vm09 podman[107009]: 
2026-03-10 09:16:38.848938487 +0000 UTC m=+0.509870418 container attach 5197c6c086ccf41f6aa9cbf8aaa10755232ca513996dfe9a20cbf8246efd9955 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-deactivate, ceph=True, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default) 2026-03-10T09:16:39.345 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:39 vm09 conmon[107161]: conmon 5197c6c086ccf41f6aa9 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5197c6c086ccf41f6aa9cbf8aaa10755232ca513996dfe9a20cbf8246efd9955.scope/container/memory.events 2026-03-10T09:16:39.345 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:39 vm09 podman[107009]: 2026-03-10 09:16:39.274627456 +0000 UTC m=+0.935559387 container died 5197c6c086ccf41f6aa9cbf8aaa10755232ca513996dfe9a20cbf8246efd9955 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T09:16:39.346 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:39 vm09 podman[107199]: 2026-03-10 09:16:39.045638961 +0000 UTC m=+0.124576504 container create 3045ed1dcbdc6046941ca5d1326d687a8b0f219da8f3ee1637b73820f0599040 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-deactivate, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) 2026-03-10T09:16:39.346 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:39 vm09 podman[107199]: 2026-03-10 09:16:38.994775191 +0000 UTC m=+0.073712745 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T09:16:39.346 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:39 vm09 podman[107199]: 2026-03-10 09:16:39.1894521 +0000 UTC m=+0.268389643 container init 3045ed1dcbdc6046941ca5d1326d687a8b0f219da8f3ee1637b73820f0599040 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-deactivate, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T09:16:39.346 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:39 vm09 podman[107199]: 2026-03-10 09:16:39.195800705 +0000 UTC m=+0.274738248 container start 3045ed1dcbdc6046941ca5d1326d687a8b0f219da8f3ee1637b73820f0599040 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-deactivate, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T09:16:39.346 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:39 vm09 podman[107199]: 2026-03-10 09:16:39.198391732 +0000 UTC m=+0.277329275 container attach 3045ed1dcbdc6046941ca5d1326d687a8b0f219da8f3ee1637b73820f0599040 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T09:16:39.640 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:39 vm09 podman[107009]: 2026-03-10 09:16:39.376854259 +0000 UTC m=+1.037786190 container remove 5197c6c086ccf41f6aa9cbf8aaa10755232ca513996dfe9a20cbf8246efd9955 
(image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-deactivate, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3) 2026-03-10T09:16:39.640 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:39 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.1.service: Failed with result 'exit-code'. 2026-03-10T09:16:39.640 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:39 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.1.service: Unit process 107161 (conmon) remains running after unit stopped. 2026-03-10T09:16:39.640 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:39 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.1.service: Consumed 5.578s CPU time, 84.5M memory peak. 2026-03-10T09:16:40.140 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:39 vm09 conmon[107277]: conmon 3045ed1dcbdc6046941c : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3045ed1dcbdc6046941ca5d1326d687a8b0f219da8f3ee1637b73820f0599040.scope/container/memory.events 2026-03-10T09:16:40.140 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:39 vm09 podman[107199]: 2026-03-10 09:16:39.75888122 +0000 UTC m=+0.837818763 container died 3045ed1dcbdc6046941ca5d1326d687a8b0f219da8f3ee1637b73820f0599040 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-deactivate, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, CEPH_REF=squid) 2026-03-10T09:16:40.140 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:39 vm09 podman[107199]: 2026-03-10 09:16:39.814629516 +0000 UTC m=+0.893567059 container remove 3045ed1dcbdc6046941ca5d1326d687a8b0f219da8f3ee1637b73820f0599040 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-deactivate, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, 
org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T09:16:40.140 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:39 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.0.service: Failed with result 'exit-code'. 2026-03-10T09:16:40.140 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:39 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.0.service: Unit process 107277 (conmon) remains running after unit stopped. 2026-03-10T09:16:40.140 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:39 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.0.service: Unit process 107414 (podman) remains running after unit stopped. 2026-03-10T09:16:40.140 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:39 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.0.service: Consumed 5.143s CPU time, 83.4M memory peak. 2026-03-10T09:16:48.407 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:48 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mon.a.service: Scheduled restart job, restart counter is at 1. 2026-03-10T09:16:48.407 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:48 vm09 systemd[1]: Stopped Ceph mon.a for 349a7c12-1c61-11f1-8c28-6d0db3d11b76. 2026-03-10T09:16:48.407 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:48 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mon.a.service: Consumed 5.464s CPU time. 2026-03-10T09:16:48.407 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:48 vm09 systemd[1]: Starting Ceph mon.a for 349a7c12-1c61-11f1-8c28-6d0db3d11b76... 2026-03-10T09:16:48.407 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:48 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.2.service: Scheduled restart job, restart counter is at 1. 2026-03-10T09:16:48.407 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:48 vm09 systemd[1]: Stopped Ceph osd.2 for 349a7c12-1c61-11f1-8c28-6d0db3d11b76. 2026-03-10T09:16:48.407 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:48 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.2.service: Consumed 5.108s CPU time. 2026-03-10T09:16:48.407 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:48 vm09 systemd[1]: Starting Ceph osd.2 for 349a7c12-1c61-11f1-8c28-6d0db3d11b76... 
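Note: the workunit stderr just below is the actual point of failure. test_iscsi_pids_limit.sh iterates over ${ISCSI_CONT_IDS}, a list of iscsi container IDs captured earlier in the script, and execs into ID 5c44fdcc3013; by now that container has been replaced, so podman exec fails with "no such container" and exits 125, leaving SLEEP_COUNT empty. A minimal sketch of the more robust pattern, re-resolving the ID at use time (the name filter "iscsi" and the helper names are illustrative assumptions, not taken from the workunit source):

    import subprocess

    def current_iscsi_container_ids():
        # Re-resolve at use time: cephadm can recreate the iscsi container,
        # which invalidates an ID captured earlier (e.g. 5c44fdcc3013 above).
        out = subprocess.run(
            ["sudo", "podman", "ps", "-q", "--filter", "name=iscsi"],
            check=True, capture_output=True, text=True,
        ).stdout
        return out.split()

    def sleep_count(cid):
        # Same probe the workunit runs:
        #   sudo podman exec <cid> /bin/sh -c 'ps -ef | grep -c sleep'
        res = subprocess.run(
            ["sudo", "podman", "exec", cid, "/bin/sh", "-c", "ps -ef | grep -c sleep"],
            capture_output=True, text=True,
        )
        # podman itself exits 125 when the container is gone; report that
        # distinctly instead of handing the caller an empty count.
        return int(res.stdout.strip()) if res.returncode == 0 else None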
2026-03-10T09:16:48.407 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:48 vm09 podman[118135]: 2026-03-10 09:16:48.358596524 +0000 UTC m=+0.035943790 container create 0a3075f82e678093b945b81d75cabc468695330b0c25e46556f1b0a89ab4abd6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T09:16:48.577 INFO:tasks.workunit.client.0.vm09.stderr:+ for i in ${ISCSI_CONT_IDS}
2026-03-10T09:16:48.578 INFO:tasks.workunit.client.0.vm09.stderr:++ sudo podman exec 5c44fdcc3013 /bin/sh -c 'ps -ef | grep -c sleep'
2026-03-10T09:16:48.648 INFO:tasks.workunit.client.0.vm09.stderr:Error: no container with name or ID "5c44fdcc3013" found: no such container
2026-03-10T09:16:48.654 INFO:tasks.workunit.client.0.vm09.stderr:+ SLEEP_COUNT=
2026-03-10T09:16:48.656 DEBUG:teuthology.orchestra.run:got remote process result: 125
2026-03-10T09:16:48.656 INFO:tasks.workunit:Stopping ['cephadm/test_iscsi_pids_limit.sh', 'cephadm/test_iscsi_etc_hosts.sh', 'cephadm/test_iscsi_setup.sh'] on client.0...
2026-03-10T09:16:48.656 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0
2026-03-10T09:16:48.665 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:48 vm09 podman[118135]: 2026-03-10 09:16:48.429826353 +0000 UTC m=+0.107173639 container init 0a3075f82e678093b945b81d75cabc468695330b0c25e46556f1b0a89ab4abd6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-10T09:16:48.665 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:48 vm09 podman[118135]: 2026-03-10 09:16:48.43713265 +0000 UTC m=+0.114479926 container start 0a3075f82e678093b945b81d75cabc468695330b0c25e46556f1b0a89ab4abd6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0,
CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T09:16:48.665 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:48 vm09 podman[118135]: 2026-03-10 09:16:48.34266475 +0000 UTC m=+0.020012026 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T09:16:48.665 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:48 vm09 podman[118135]: 2026-03-10 09:16:48.442603795 +0000 UTC m=+0.119951071 container attach 0a3075f82e678093b945b81d75cabc468695330b0c25e46556f1b0a89ab4abd6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True) 2026-03-10T09:16:48.665 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:48 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate[118196]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:48.665 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:48 vm09 bash[118135]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:48.937 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:48 vm09 podman[118472]: 2026-03-10 09:16:48.916371775 +0000 UTC m=+0.107484290 container create cd251e4fc8619d5ee73521eabdffe700637d7520da54fb3defda58ac8e7154e6 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.vendor=CentOS) 2026-03-10T09:16:48.937 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:48 vm09 podman[118472]: 2026-03-10 09:16:48.84082489 +0000 UTC m=+0.031937405 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T09:16:48.937 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:48 vm09 
ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate[118196]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:48.937 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:48 vm09 bash[118135]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:49.203 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 podman[118472]: 2026-03-10 09:16:49.114503878 +0000 UTC m=+0.305616403 container init cd251e4fc8619d5ee73521eabdffe700637d7520da54fb3defda58ac8e7154e6 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2) 2026-03-10T09:16:49.203 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 podman[118472]: 2026-03-10 09:16:49.15290228 +0000 UTC m=+0.344014795 container start cd251e4fc8619d5ee73521eabdffe700637d7520da54fb3defda58ac8e7154e6 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS) 2026-03-10T09:16:49.203 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 bash[118472]: cd251e4fc8619d5ee73521eabdffe700637d7520da54fb3defda58ac8e7154e6 2026-03-10T09:16:49.203 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 systemd[1]: Started Ceph mon.a for 349a7c12-1c61-11f1-8c28-6d0db3d11b76. 2026-03-10T09:16:49.291 ERROR:teuthology.run_tasks:Saw exception from tasks. 
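Note: the traceback below shows how that exit status surfaces. The workunit's remote.run() waits on the SSH command; wait() calls _raise_for_status(), which converts any nonzero remote exit (here podman's 125) into CommandFailedError, unwinding the workunit task and triggering the cephadm teardown that follows. A condensed sketch of that path, simplified from the frames shown rather than copied from teuthology:

    class CommandFailedError(Exception):
        def __init__(self, command, exitstatus, node):
            super().__init__(
                f"Command failed on {node} with status {exitstatus}: {command!r}")
            self.command = command
            self.exitstatus = exitstatus

    def raise_for_status(command, exitstatus, node="vm09"):
        # Mirrors run.py wait() -> _raise_for_status(): exit 0 means success;
        # anything else becomes the exception the task runner reports as
        # "Saw exception from tasks." before unwinding managers.
        if exitstatus != 0:
            raise CommandFailedError(command, exitstatus, node)

    # Reproduces the shape of the error below:
    # raise_for_status("cephadm/test_iscsi_pids_limit.sh", 125)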
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 105, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 83, in run_one_task
    return task(**kwargs)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/workunit.py", line 125, in task
    with parallel() as p:
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 84, in __exit__
    for result in self:
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 98, in __next__
    resurrect_traceback(result)
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 30, in resurrect_traceback
    raise exc.exc_info[1]
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 23, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/workunit.py", line 433, in _run_tests
    remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on vm09 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh'
2026-03-10T09:16:49.292 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-10T09:16:49.294 INFO:tasks.cephadm:Teardown begin
2026-03-10T09:16:49.295 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T09:16:49.379 INFO:tasks.cephadm:Disabling cephadm mgr module
2026-03-10T09:16:49.379 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 -- ceph mgr module disable cephadm
2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: set uid:gid to 167:167 (ceph:ceph)
2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2
2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: pidfile_write: ignore empty --pid-file
2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: load: jerasure load: lrc
2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb:
RocksDB version: 7.9.2 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Git sha 0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: DB SUMMARY 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: DB Session ID: K9OZI4109OKEF4FRB65G 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: CURRENT file: CURRENT 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: IDENTITY file: IDENTITY 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: MANIFEST file: MANIFEST-000015 size: 281 Bytes 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 2, files: 000008.sst 000013.sst 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000014.log size: 3926821 ; 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.error_if_exists: 0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.create_if_missing: 0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.paranoid_checks: 1 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.env: 0x55dad2ad0dc0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.fs: PosixFileSystem 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.info_log: 0x55dad4f3d7e0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_file_opening_threads: 16 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.statistics: (nil) 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.use_fsync: 0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_log_file_size: 0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.log_file_time_to_roll: 0 
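Note: from here the mon.a journal is dominated by ceph-mon's RocksDB startup banner, which prints every effective option as "Options.<name>: <value>". When comparing monitor configs across runs, those pairs can be scraped out of a journal capture; a small sketch (assumes one journalctl record per line, as in the raw journal):

    import re

    # Tolerates the occasional space before the colon, e.g.
    # "Options.delayed_write_rate : 16777216", and empty values ("Options.wal_dir:").
    OPT_RE = re.compile(r"rocksdb: (Options\.[\w.\[\]]+)\s*:\s*(.*)$")

    def rocksdb_options(lines):
        opts = {}
        for line in lines:
            m = OPT_RE.search(line)
            if m:
                opts[m.group(1)] = m.group(2).strip()
        return opts

    # e.g. rocksdb_options(open("mon.a.journal"))["Options.max_open_files"] -> "-1"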
2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.keep_log_file_num: 1000 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.recycle_log_file_num: 0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.allow_fallocate: 1 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.allow_mmap_reads: 0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.allow_mmap_writes: 0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.use_direct_reads: 0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.create_missing_column_families: 0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.db_log_dir: 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.wal_dir: 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.advise_random_on_open: 1 2026-03-10T09:16:49.640 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.db_write_buffer_size: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.write_buffer_manager: 0x55dad4f41900 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.rate_limiter: (nil) 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T09:16:49.641 
INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.wal_recovery_mode: 2 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.enable_thread_tracking: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.enable_pipelined_write: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.unordered_write: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.row_cache: None 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.wal_filter: None 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.allow_ingest_behind: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.two_write_queues: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.manual_wal_flush: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.wal_compression: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.atomic_flush: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.log_readahead_size: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.best_efforts_recovery: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 
ceph-mon[118554]: rocksdb: Options.allow_data_in_errors: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.db_host_id: __hostname__ 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_background_jobs: 2 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_background_compactions: -1 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_subcompactions: 1 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_total_wal_size: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_open_files: -1 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.bytes_per_sync: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compaction_readahead_size: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_background_flushes: -1 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Compression algorithms supported: 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: kZSTD supported: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: kXpressCompression supported: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: kBZip2Compression supported: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: kZSTDNotFinalCompression 
supported: 0 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: kLZ4Compression supported: 1 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: kZlibCompression supported: 1 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: kLZ4HCCompression supported: 1 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: kSnappyCompression supported: 1 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000015 2026-03-10T09:16:49.641 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.merge_operator: 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compaction_filter: None 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compaction_filter_factory: None 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.sst_partitioner_factory: None 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dad4f3c320) 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: cache_index_and_filter_blocks: 1 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: pin_top_level_index_and_filter: 1 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: index_type: 0 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: data_block_index_type: 0 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: index_shortening: 1 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: checksum: 4 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: no_block_cache: 0 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: block_cache: 
0x55dad4f61350 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: block_cache_name: BinnedLRUCache 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: block_cache_options: 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: capacity : 536870912 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: num_shard_bits : 4 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: strict_capacity_limit : 0 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: high_pri_pool_ratio: 0.000 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: block_cache_compressed: (nil) 2026-03-10T09:16:49.642 INFO:journalctl@ceph.mon.a.vm09.stdout: persistent_cache: (nil) 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: block_size: 4096 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: block_size_deviation: 10 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: block_restart_interval: 16 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: index_block_restart_interval: 1 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: metadata_block_size: 4096 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: partition_filters: 0 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: use_delta_encoding: 1 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: filter_policy: bloomfilter 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: whole_key_filtering: 1 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: verify_compression: 0 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: read_amp_bytes_per_bit: 0 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: format_version: 5 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: enable_index_compression: 1 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: block_align: 0 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: max_auto_readahead_size: 262144 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: prepopulate_block_cache: 0 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: initial_auto_readahead_size: 8192 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout: num_file_reads_for_auto_readahead: 2 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.write_buffer_size: 33554432 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_write_buffer_number: 2 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compression: NoCompression 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.bottommost_compression: Disabled 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.prefix_extractor: nullptr 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.num_levels: 7 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.min_write_buffer_number_to_merge: 1 
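Note: the block_cache options printed above pin down the mon's BinnedLRUCache geometry: capacity 536870912 bytes is 512 MiB (matching the "capacity: 512.00 MB" line in the stats dump further below), and num_shard_bits 4 gives 2^4 = 16 shards. Assuming capacity is split evenly across shards, as in stock RocksDB LRUCache (an assumption, not something the log states), that is about 32 MiB per shard:

    # Quick arithmetic on the logged cache settings (even per-shard split assumed).
    capacity_bytes = 536870912        # "capacity : 536870912"
    num_shard_bits = 4                # "num_shard_bits : 4"
    shards = 2 ** num_shard_bits      # 16 shards
    per_shard_mib = capacity_bytes / shards / 2**20
    print(shards, per_shard_mib)      # -> 16 32.0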
2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compression_opts.level: 32767 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compression_opts.strategy: 0 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compression_opts.enabled: false 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T09:16:49.643 
INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.target_file_size_base: 67108864 2026-03-10T09:16:49.643 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.arena_block_size: 1048576 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.disable_auto_compactions: 0 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: 
rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.inplace_update_support: 0 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.bloom_locality: 0 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.max_successive_merges: 0 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.paranoid_file_checks: 0 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.force_consistency_checks: 1 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.ttl: 2592000 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: 
Options.preclude_last_level_data_seconds: 0
2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.enable_blob_files: false
2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.min_blob_size: 0
2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.blob_file_size: 268435456
2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-10T09:16:49.644 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.blob_file_starting_level: 0
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000015 succeeded,manifest_file_number is 15, next_file_number is 17, last_sequence is 225, log_number is 10,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 10
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 10
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7eb0c5ee-6cc1-49e4-9b8d-70ca3c146bfd
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773134209273308, "job": 1, "event": "recovery_started", "wal_files": [14]}
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #14 mode 2
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773134209353443, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 18, "file_size": 3505143, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 228, "largest_seqno": 2941, "table_properties": {"data_size": 3494074, "index_size": 6828, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3205, "raw_key_size": 29049, "raw_average_key_size": 23, "raw_value_size": 3468979, "raw_average_value_size": 2761, "num_data_blocks": 322, "num_entries": 1256, "num_filter_entries": 1256, "num_deletions": 1, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773134209, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7eb0c5ee-6cc1-49e4-9b8d-70ca3c146bfd", "db_session_id": "K9OZI4109OKEF4FRB65G", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773134209353547, "job": 1, "event": "recovery_finished"}
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: [db/version_set.cc:5047] Creating manifest 20
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55dad4f62e00
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: DB pointer 0x55dad506c000
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: ** DB Stats **
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: Uptime(secs): 0.1 total, 0.1 interval
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: ** Compaction Stats [default] **
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: L0 3/0 3.41 MB 0.8 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 71.9 0.05 0.00 1 0.046 0 0 0.0 0.0
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: Sum 3/0 3.41 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 71.9 0.05 0.00 1 0.046 0 0 0.0 0.0
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 71.9 0.05 0.00 1 0.046 0 0 0.0 0.0
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: ** Compaction Stats [default] **
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 71.9 0.05 0.00 1 0.046 0 0 0.0 0.0
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout:
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: Uptime(secs): 0.1 total, 0.1 interval
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: Flush(GB): cumulative 0.003, interval 0.003
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: AddFile(Total Files): cumulative 0, interval 0
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: AddFile(L0 Files): cumulative 0, interval 0
2026-03-10T09:16:49.645 INFO:journalctl@ceph.mon.a.vm09.stdout: AddFile(Keys): cumulative 0, interval 0
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout: Cumulative compaction: 0.00 GB write, 31.79 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout: Interval compaction: 0.00 GB write, 31.79 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout: Block cache BinnedLRUCache@0x55dad4f61350#2 capacity: 512.00 MB usage: 41.61 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1e-05 secs_since: 0
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout: Block cache entry stats(count,size,portion): DataBlock(4,29.19 KB,0.00556707%) FilterBlock(3,3.98 KB,0.000759959%) IndexBlock(3,8.44 KB,0.00160933%) Misc(1,0.00 KB,0%)
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout:
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout: ** File Read Latency Histogram By Level [default] **
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: starting mon.a rank 0 at public addrs [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] at bind addrs [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: mon.a@-1(???) e1 preinit fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: mon.a@-1(???).mds e1 new map
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: mon.a@-1(???).mds e1 print_map
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout: e1
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout: btime 2026-03-10T09:12:42:416288+0000
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout: enable_multiple, ever_enabled_multiple: 1,1
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout: legacy client fscid: -1
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout:
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout: No filesystems configured
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: mon.a@-1(???).osd e36 crush map has features 3314933000852226048, adjusting msgr requires
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: mon.a@-1(???).osd e36 crush map has features 288514051259236352, adjusting msgr requires
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: mon.a@-1(???).osd e36 crush map has features 288514051259236352, adjusting msgr requires
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: mon.a@-1(???).osd e36 crush map has features 288514051259236352, adjusting msgr requires
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: mon.a@-1(???).paxosservice(auth 1..8) refresh upgraded, format 0 -> 3
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: mon.a@-1(???).mgr e0 loading version 15
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: mon.a@-1(???).mgr e15 active server: (0)
2026-03-10T09:16:49.646 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-mon[118554]: mon.a@-1(???).mgr e15 mkfs or daemon transitioned to available, loading commands
2026-03-10T09:16:49.646 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:49 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.1.service:
Scheduled restart job, restart counter is at 1.
2026-03-10T09:16:49.646 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:49 vm09 systemd[1]: Stopped Ceph osd.1 for 349a7c12-1c61-11f1-8c28-6d0db3d11b76.
2026-03-10T09:16:49.646 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:49 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.1.service: Consumed 5.578s CPU time, 84.5M memory peak.
2026-03-10T09:16:49.646 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:49 vm09 systemd[1]: Starting Ceph osd.1 for 349a7c12-1c61-11f1-8c28-6d0db3d11b76...
2026-03-10T09:16:49.798 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/mon.a/config
2026-03-10T09:16:49.828 INFO:teuthology.orchestra.run.vm09.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory
2026-03-10T09:16:49.880 DEBUG:teuthology.orchestra.run:got remote process result: 125
2026-03-10T09:16:49.881 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-10T09:16:49.881 DEBUG:teuthology.orchestra.run.vm09:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T09:16:49.916 INFO:tasks.cephadm:Stopping all daemons...
2026-03-10T09:16:49.916 INFO:tasks.cephadm.mon.a:Stopping mon.a...
2026-03-10T09:16:49.916 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mon.a
2026-03-10T09:16:49.931 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:49 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate[118196]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T09:16:49.931 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:49 vm09 bash[118135]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T09:16:49.931 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:49 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate[118196]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T09:16:49.931 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:49 vm09 bash[118135]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T09:16:49.931 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:49 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate[118196]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T09:16:49.931 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:49 vm09 bash[118135]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T09:16:49.931 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:49 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate[118196]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-10T09:16:49.931 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:49 vm09 bash[118135]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-10T09:16:49.931 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:49 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate[118196]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-7996d46e-c244-4b9f-ba57-ed2880f2cd32/osd-block-a2664302-47b2-48a9-ac35-65f3bc5a6c6e --path /var/lib/ceph/osd/ceph-2 --no-mon-config
2026-03-10T09:16:49.931 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:49 vm09 bash[118135]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev
/dev/ceph-7996d46e-c244-4b9f-ba57-ed2880f2cd32/osd-block-a2664302-47b2-48a9-ac35-65f3bc5a6c6e --path /var/lib/ceph/osd/ceph-2 --no-mon-config 2026-03-10T09:16:49.932 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:49 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.0.service: Scheduled restart job, restart counter is at 1. 2026-03-10T09:16:49.932 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:49 vm09 systemd[1]: Stopped Ceph osd.0 for 349a7c12-1c61-11f1-8c28-6d0db3d11b76. 2026-03-10T09:16:49.932 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:49 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.0.service: Consumed 5.157s CPU time, 83.4M memory peak. 2026-03-10T09:16:49.932 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:49 vm09 systemd[1]: Starting Ceph osd.0 for 349a7c12-1c61-11f1-8c28-6d0db3d11b76... 2026-03-10T09:16:50.182 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:49 vm09 systemd[1]: Stopping Ceph mon.a for 349a7c12-1c61-11f1-8c28-6d0db3d11b76... 2026-03-10T09:16:50.182 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate[118196]: Running command: /usr/bin/ln -snf /dev/ceph-7996d46e-c244-4b9f-ba57-ed2880f2cd32/osd-block-a2664302-47b2-48a9-ac35-65f3bc5a6c6e /var/lib/ceph/osd/ceph-2/block 2026-03-10T09:16:50.182 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:50 vm09 bash[118135]: Running command: /usr/bin/ln -snf /dev/ceph-7996d46e-c244-4b9f-ba57-ed2880f2cd32/osd-block-a2664302-47b2-48a9-ac35-65f3bc5a6c6e /var/lib/ceph/osd/ceph-2/block 2026-03-10T09:16:50.182 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate[118196]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block 2026-03-10T09:16:50.182 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:50 vm09 bash[118135]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block 2026-03-10T09:16:50.182 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate[118196]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-10T09:16:50.182 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:50 vm09 bash[118135]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-10T09:16:50.182 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate[118196]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-10T09:16:50.182 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:50 vm09 bash[118135]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-10T09:16:50.183 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:49 vm09 podman[118863]: 2026-03-10 09:16:49.934141973 +0000 UTC m=+0.054338022 container create 8ab33062a022467a53bd5d4fe6702877bd97cbde503d7b1654f4f8ae597947a4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T09:16:50.183 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:50 vm09 podman[118863]: 2026-03-10 09:16:49.899267764 +0000 UTC m=+0.019463823 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T09:16:50.183 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:50 vm09 podman[118863]: 2026-03-10 09:16:50.019578607 +0000 UTC m=+0.139774656 container init 8ab33062a022467a53bd5d4fe6702877bd97cbde503d7b1654f4f8ae597947a4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.vendor=CentOS) 2026-03-10T09:16:50.183 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:50 vm09 podman[118863]: 2026-03-10 09:16:50.032269198 +0000 UTC m=+0.152465247 container start 8ab33062a022467a53bd5d4fe6702877bd97cbde503d7b1654f4f8ae597947a4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T09:16:50.183 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:50 vm09 podman[118863]: 2026-03-10 09:16:50.042948805 +0000 UTC m=+0.163144854 container attach 8ab33062a022467a53bd5d4fe6702877bd97cbde503d7b1654f4f8ae597947a4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, 
org.label-schema.build-date=20260223, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T09:16:50.183 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate[118941]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:50.183 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:50 vm09 bash[118863]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:50.183 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:49 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[106554]: 2026-03-10T09:16:49.980+0000 7f2c26118140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T09:16:50.183 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[106554]: 2026-03-10T09:16:50.075+0000 7f2c26118140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T09:16:50.448 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a[118519]: 2026-03-10T09:16:50.395+0000 7fa79e190640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T09:16:50.449 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a[118519]: 2026-03-10T09:16:50.395+0000 7fa79e190640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T09:16:50.449 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate[118196]: --> ceph-volume lvm activate successful for osd ID: 2 2026-03-10T09:16:50.449 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:50 vm09 bash[118135]: --> ceph-volume lvm activate successful for osd ID: 2 2026-03-10T09:16:50.449 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:50 vm09 podman[118990]: 2026-03-10 09:16:50.298220859 +0000 UTC m=+0.055576496 container died 0a3075f82e678093b945b81d75cabc468695330b0c25e46556f1b0a89ab4abd6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T09:16:50.449 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:50 vm09 podman[118990]: 2026-03-10 09:16:50.388109182 +0000 UTC m=+0.145464819 container remove 0a3075f82e678093b945b81d75cabc468695330b0c25e46556f1b0a89ab4abd6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-activate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.vendor=CentOS) 2026-03-10T09:16:50.449 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:50 vm09 podman[119010]: 2026-03-10 09:16:50.427073642 +0000 UTC m=+0.103770223 container create 7941f5f36a3b6a4af27eb69889b9f35953b6ba08cce076afef00cbbe3b3e0a3d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2) 2026-03-10T09:16:50.449 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate[118941]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:50.449 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:50 vm09 bash[118863]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:50.789 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:50 vm09 podman[119007]: 2026-03-10 09:16:50.45173247 +0000 UTC m=+0.147402293 container died cd251e4fc8619d5ee73521eabdffe700637d7520da54fb3defda58ac8e7154e6 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T09:16:50.789 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:50 vm09 podman[119007]: 2026-03-10 09:16:50.590572376 +0000 UTC m=+0.286242199 container remove cd251e4fc8619d5ee73521eabdffe700637d7520da54fb3defda58ac8e7154e6 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, 
org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T09:16:50.789 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:50 vm09 bash[119007]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mon-a 2026-03-10T09:16:50.789 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:50 vm09 podman[119010]: 2026-03-10 09:16:50.383932017 +0000 UTC m=+0.060628598 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T09:16:50.789 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:50 vm09 podman[119010]: 2026-03-10 09:16:50.519917654 +0000 UTC m=+0.196614224 container init 7941f5f36a3b6a4af27eb69889b9f35953b6ba08cce076afef00cbbe3b3e0a3d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T09:16:50.789 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:50 vm09 podman[119010]: 2026-03-10 09:16:50.532507105 +0000 UTC m=+0.209203675 container start 7941f5f36a3b6a4af27eb69889b9f35953b6ba08cce076afef00cbbe3b3e0a3d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T09:16:50.789 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:50 vm09 podman[119010]: 2026-03-10 09:16:50.563898798 +0000 UTC m=+0.240595379 container attach 7941f5f36a3b6a4af27eb69889b9f35953b6ba08cce076afef00cbbe3b3e0a3d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T09:16:50.871 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mon.a.service' 2026-03-10T09:16:51.065 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:50 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mon.a.service: Deactivated successfully. 2026-03-10T09:16:51.065 INFO:journalctl@ceph.mon.a.vm09.stdout:Mar 10 09:16:50 vm09 systemd[1]: Stopped Ceph mon.a for 349a7c12-1c61-11f1-8c28-6d0db3d11b76. 2026-03-10T09:16:51.065 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate[119064]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:51.065 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:50 vm09 bash[119010]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:51.065 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:50 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate[119064]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:51.065 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:50 vm09 bash[119010]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:51.065 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:50 vm09 podman[119153]: 2026-03-10 09:16:50.930304959 +0000 UTC m=+0.081880723 container create b89bd504ce8a2a4e313924d177147a8abeb41639e703eb9872653e84d3f79314 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T09:16:51.065 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:50 vm09 podman[119153]: 2026-03-10 09:16:50.899312282 +0000 UTC m=+0.050888065 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T09:16:51.065 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:51 vm09 podman[119153]: 2026-03-10 09:16:51.014893207 +0000 UTC m=+0.166468992 container init b89bd504ce8a2a4e313924d177147a8abeb41639e703eb9872653e84d3f79314 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0) 2026-03-10T09:16:51.065 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:51 vm09 podman[119153]: 2026-03-10 09:16:51.017821083 +0000 UTC m=+0.169396857 container start b89bd504ce8a2a4e313924d177147a8abeb41639e703eb9872653e84d3f79314 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T09:16:51.066 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:51 vm09 bash[119153]: b89bd504ce8a2a4e313924d177147a8abeb41639e703eb9872653e84d3f79314 2026-03-10T09:16:51.377 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:51 vm09 systemd[1]: Started Ceph osd.2 for 349a7c12-1c61-11f1-8c28-6d0db3d11b76. 2026-03-10T09:16:51.640 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a[106554]: 2026-03-10T09:16:51.379+0000 7f2c26118140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T09:16:51.641 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate[118941]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T09:16:51.641 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate[118941]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:51.641 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:51 vm09 bash[118863]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T09:16:51.641 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:51 vm09 bash[118863]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:51.690 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T09:16:51.691 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-10T09:16:51.691 INFO:tasks.cephadm.mgr.a:Stopping mgr.a... 2026-03-10T09:16:51.691 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a 2026-03-10T09:16:51.977 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:51 vm09 systemd[1]: Stopping Ceph mgr.a for 349a7c12-1c61-11f1-8c28-6d0db3d11b76... 
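For reference, the stop commands above use cephadm's systemd unit naming, ceph-<fsid>@<daemon>. A minimal bash sketch of the same stop-and-untail sequence, with the fsid taken from this run and the daemon name as an example:

    #!/usr/bin/env bash
    # Stop a cephadm-managed daemon and the background journal tail for it,
    # mirroring the teardown commands visible in this log.
    FSID=349a7c12-1c61-11f1-8c28-6d0db3d11b76
    DAEMON=mgr.a
    # cephadm wraps each daemon in a systemd unit named ceph-<fsid>@<daemon>.
    sudo systemctl stop "ceph-${FSID}@${DAEMON}"
    # teuthology tails each unit's journal with 'journalctl -f -n 0 -u ...';
    # the matching tail is killed with the same pkill pattern seen above.
    sudo pkill -f "journalctl -f -n 0 -u ceph-${FSID}@${DAEMON}.service"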
2026-03-10T09:16:51.977 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:51 vm09 podman[119443]: 2026-03-10 09:16:51.979504951 +0000 UTC m=+0.050694264 container died 10ce0f257fc871b44a453a3c39531564dbdc566e4c7b47d5565d110ae6996b4e (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T09:16:51.977 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate[118941]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:51.977 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:51 vm09 bash[118863]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:51.977 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate[118941]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-10T09:16:51.977 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:51 vm09 bash[118863]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-10T09:16:51.977 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:51 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate[118941]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-f514be33-0827-4379-97f0-47508e746cea/osd-block-9a4cc04f-8019-4083-b136-60d601e0d497 --path /var/lib/ceph/osd/ceph-1 --no-mon-config 2026-03-10T09:16:51.977 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:51 vm09 bash[118863]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-f514be33-0827-4379-97f0-47508e746cea/osd-block-9a4cc04f-8019-4083-b136-60d601e0d497 --path /var/lib/ceph/osd/ceph-1 --no-mon-config 2026-03-10T09:16:52.113 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a.service' 2026-03-10T09:16:52.274 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate[119064]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T09:16:52.274 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate[119064]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:52.274 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 bash[119010]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T09:16:52.274 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 bash[119010]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:52.274 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate[119064]: Running command: /usr/bin/ceph-authtool 
--gen-print-key 2026-03-10T09:16:52.274 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 bash[119010]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T09:16:52.276 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate[118941]: Running command: /usr/bin/ln -snf /dev/ceph-f514be33-0827-4379-97f0-47508e746cea/osd-block-9a4cc04f-8019-4083-b136-60d601e0d497 /var/lib/ceph/osd/ceph-1/block 2026-03-10T09:16:52.276 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 bash[118863]: Running command: /usr/bin/ln -snf /dev/ceph-f514be33-0827-4379-97f0-47508e746cea/osd-block-9a4cc04f-8019-4083-b136-60d601e0d497 /var/lib/ceph/osd/ceph-1/block 2026-03-10T09:16:52.277 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate[118941]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block 2026-03-10T09:16:52.277 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 bash[118863]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block 2026-03-10T09:16:52.277 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 bash[118863]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-10T09:16:52.277 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate[118941]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-10T09:16:52.277 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate[118941]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-10T09:16:52.277 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 bash[118863]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-10T09:16:52.277 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate[118941]: --> ceph-volume lvm activate successful for osd ID: 1 2026-03-10T09:16:52.277 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 bash[118863]: --> ceph-volume lvm activate successful for osd ID: 1 2026-03-10T09:16:52.277 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 conmon[118941]: conmon 8ab33062a022467a53bd : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ab33062a022467a53bd5d4fe6702877bd97cbde503d7b1654f4f8ae597947a4.scope/container/memory.events 2026-03-10T09:16:52.277 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 podman[118863]: 2026-03-10 09:16:52.153470646 +0000 UTC m=+2.273666695 container died 8ab33062a022467a53bd5d4fe6702877bd97cbde503d7b1654f4f8ae597947a4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T09:16:52.277 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 podman[118863]: 2026-03-10 09:16:52.202100166 +0000 UTC m=+2.322296215 container remove 8ab33062a022467a53bd5d4fe6702877bd97cbde503d7b1654f4f8ae597947a4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS) 2026-03-10T09:16:52.277 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:52 vm09 podman[119443]: 2026-03-10 09:16:52.00252303 +0000 UTC m=+0.073712343 container remove 10ce0f257fc871b44a453a3c39531564dbdc566e4c7b47d5565d110ae6996b4e (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T09:16:52.277 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:52 vm09 bash[119443]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-mgr-a 2026-03-10T09:16:52.277 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:52 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a.service: Main process exited, code=exited, status=143/n/a 2026-03-10T09:16:52.277 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:52 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a.service: Failed with result 'exit-code'. 2026-03-10T09:16:52.277 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:52 vm09 systemd[1]: Stopped Ceph mgr.a for 349a7c12-1c61-11f1-8c28-6d0db3d11b76. 2026-03-10T09:16:52.277 INFO:journalctl@ceph.mgr.a.vm09.stdout:Mar 10 09:16:52 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@mgr.a.service: Consumed 3.619s CPU time. 
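The status=143 in the mgr.a unit result above is the conventional 128+N exit encoding for a process killed by signal N; 143 - 128 = 15 (SIGTERM), consistent with the "Got Signal Terminated" line the mon printed during its own shutdown. A one-liner to decode such a status, assuming only standard shell semantics:

    status=143
    kill -l $((status - 128))   # prints TERM: the unit exited on SIGTERM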
2026-03-10T09:16:52.581 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate[119064]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-10T09:16:52.581 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 bash[119010]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-10T09:16:52.581 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate[119064]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-6a2f294b-7f73-4339-96a4-16ac0ca8c981/osd-block-d0268d12-2d91-4c58-847f-4481a225bb98 --path /var/lib/ceph/osd/ceph-0 --no-mon-config 2026-03-10T09:16:52.581 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 bash[119010]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-6a2f294b-7f73-4339-96a4-16ac0ca8c981/osd-block-d0268d12-2d91-4c58-847f-4481a225bb98 --path /var/lib/ceph/osd/ceph-0 --no-mon-config 2026-03-10T09:16:52.581 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 podman[119675]: 2026-03-10 09:16:52.386012579 +0000 UTC m=+0.029771636 container create 1c99497d401812635e7191288a97cf20a4909f95b9221048309e4b47992bf94e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) 2026-03-10T09:16:52.582 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 podman[119675]: 2026-03-10 09:16:52.434020616 +0000 UTC m=+0.077779673 container init 1c99497d401812635e7191288a97cf20a4909f95b9221048309e4b47992bf94e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3) 2026-03-10T09:16:52.582 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 podman[119675]: 2026-03-10 09:16:52.442550313 +0000 UTC m=+0.086309359 container start 1c99497d401812635e7191288a97cf20a4909f95b9221048309e4b47992bf94e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1, 
org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T09:16:52.582 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 bash[119675]: 1c99497d401812635e7191288a97cf20a4909f95b9221048309e4b47992bf94e 2026-03-10T09:16:52.582 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 podman[119675]: 2026-03-10 09:16:52.369275839 +0000 UTC m=+0.013034896 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T09:16:52.582 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:52 vm09 systemd[1]: Started Ceph osd.1 for 349a7c12-1c61-11f1-8c28-6d0db3d11b76. 2026-03-10T09:16:52.677 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T09:16:52.678 INFO:tasks.cephadm.mgr.a:Stopped mgr.a 2026-03-10T09:16:52.678 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-10T09:16:52.678 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.0 2026-03-10T09:16:52.854 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate[119064]: Running command: /usr/bin/ln -snf /dev/ceph-6a2f294b-7f73-4339-96a4-16ac0ca8c981/osd-block-d0268d12-2d91-4c58-847f-4481a225bb98 /var/lib/ceph/osd/ceph-0/block 2026-03-10T09:16:52.855 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 bash[119010]: Running command: /usr/bin/ln -snf /dev/ceph-6a2f294b-7f73-4339-96a4-16ac0ca8c981/osd-block-d0268d12-2d91-4c58-847f-4481a225bb98 /var/lib/ceph/osd/ceph-0/block 2026-03-10T09:16:52.855 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate[119064]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block 2026-03-10T09:16:52.855 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 bash[119010]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block 2026-03-10T09:16:52.855 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate[119064]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-10T09:16:52.855 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 bash[119010]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-10T09:16:52.855 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate[119064]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-10T09:16:52.855 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 bash[119010]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-10T09:16:52.855 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate[119064]: --> ceph-volume lvm activate successful for osd ID: 0 
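The osd-0-activate container above runs the usual ceph-volume LVM activation steps for a BlueStore OSD. Condensed from the commands logged for osd.0, as a sketch of what ceph-volume does rather than a substitute for running it:

    #!/usr/bin/env bash
    OSD_DIR=/var/lib/ceph/osd/ceph-0
    LV=/dev/ceph-6a2f294b-7f73-4339-96a4-16ac0ca8c981/osd-block-d0268d12-2d91-4c58-847f-4481a225bb98
    # Rebuild the OSD dir from BlueStore metadata without contacting the mons.
    ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev "$LV" --path "$OSD_DIR" --no-mon-config
    # Point the block symlink at the LV and hand ownership to the ceph user.
    ln -snf "$LV" "$OSD_DIR/block"
    chown -h ceph:ceph "$OSD_DIR/block"
    chown -R ceph:ceph "$(readlink -f "$LV")"   # the underlying /dev/dm-* node
    chown -R ceph:ceph "$OSD_DIR"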
2026-03-10T09:16:52.855 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 bash[119010]: --> ceph-volume lvm activate successful for osd ID: 0 2026-03-10T09:16:52.855 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 conmon[119064]: conmon 7941f5f36a3b6a4af27e : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7941f5f36a3b6a4af27eb69889b9f35953b6ba08cce076afef00cbbe3b3e0a3d.scope/container/memory.events 2026-03-10T09:16:52.855 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 podman[119010]: 2026-03-10 09:16:52.623528676 +0000 UTC m=+2.300225257 container died 7941f5f36a3b6a4af27eb69889b9f35953b6ba08cce076afef00cbbe3b3e0a3d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T09:16:52.855 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 podman[119010]: 2026-03-10 09:16:52.652819931 +0000 UTC m=+2.329516512 container remove 7941f5f36a3b6a4af27eb69889b9f35953b6ba08cce076afef00cbbe3b3e0a3d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223) 2026-03-10T09:16:52.855 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 podman[119771]: 2026-03-10 09:16:52.824689956 +0000 UTC m=+0.029127049 container create abcccef21a1d777a2a6fbb4108530f6895721712a17b7a00131c40e7ecf2911d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image) 
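The create/init/start/attach/died/remove records around here are podman's container lifecycle events relayed into the journal. The same stream can be watched directly; the container name below is the one from this run and the time window is an example:

    # Watch lifecycle events for the short-lived activate container.
    sudo podman events --since 5m \
      --filter container=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-activate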
2026-03-10T09:16:53.121 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 podman[119771]: 2026-03-10 09:16:52.871496305 +0000 UTC m=+0.075933398 container init abcccef21a1d777a2a6fbb4108530f6895721712a17b7a00131c40e7ecf2911d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, OSD_FLAVOR=default) 2026-03-10T09:16:53.121 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 podman[119771]: 2026-03-10 09:16:52.874705338 +0000 UTC m=+0.079142432 container start abcccef21a1d777a2a6fbb4108530f6895721712a17b7a00131c40e7ecf2911d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T09:16:53.121 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 bash[119771]: abcccef21a1d777a2a6fbb4108530f6895721712a17b7a00131c40e7ecf2911d 2026-03-10T09:16:53.121 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 podman[119771]: 2026-03-10 09:16:52.811650022 +0000 UTC m=+0.016087115 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T09:16:53.121 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 podman[119805]: 2026-03-10 09:16:52.942336164 +0000 UTC m=+0.032568826 container died abcccef21a1d777a2a6fbb4108530f6895721712a17b7a00131c40e7ecf2911d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, 
org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T09:16:53.121 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 podman[119805]: 2026-03-10 09:16:52.959312402 +0000 UTC m=+0.049545074 container remove abcccef21a1d777a2a6fbb4108530f6895721712a17b7a00131c40e7ecf2911d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True) 2026-03-10T09:16:53.121 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:52 vm09 bash[119805]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0 2026-03-10T09:16:53.122 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:53 vm09 podman[119852]: 2026-03-10 09:16:53.102098208 +0000 UTC m=+0.017889916 container create b71adabceffc248e2fcc659ebb84488d1c11a873c4ea80a53d59870986c74eb3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-deactivate, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T09:16:53.315 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.0.service' 2026-03-10T09:16:53.390 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:53 vm09 podman[119852]: 2026-03-10 09:16:53.142307357 +0000 UTC m=+0.058099075 container init b71adabceffc248e2fcc659ebb84488d1c11a873c4ea80a53d59870986c74eb3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default) 2026-03-10T09:16:53.390 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:53 vm09 
podman[119852]: 2026-03-10 09:16:53.145954229 +0000 UTC m=+0.061745928 container start b71adabceffc248e2fcc659ebb84488d1c11a873c4ea80a53d59870986c74eb3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) 2026-03-10T09:16:53.390 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:53 vm09 podman[119852]: 2026-03-10 09:16:53.15113971 +0000 UTC m=+0.066931418 container attach b71adabceffc248e2fcc659ebb84488d1c11a873c4ea80a53d59870986c74eb3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T09:16:53.390 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:53 vm09 podman[119852]: 2026-03-10 09:16:53.09515994 +0000 UTC m=+0.010951658 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T09:16:53.390 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:53 vm09 podman[119852]: 2026-03-10 09:16:53.281887337 +0000 UTC m=+0.197679045 container died b71adabceffc248e2fcc659ebb84488d1c11a873c4ea80a53d59870986c74eb3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3) 2026-03-10T09:16:53.390 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:53 vm09 podman[119852]: 2026-03-10 09:16:53.296225731 +0000 UTC m=+0.212017439 container remove 
b71adabceffc248e2fcc659ebb84488d1c11a873c4ea80a53d59870986c74eb3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-0-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True) 2026-03-10T09:16:53.390 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:53 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.0.service: Deactivated successfully. 2026-03-10T09:16:53.390 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:53 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.0.service: Unit process 119863 (conmon) remains running after unit stopped. 2026-03-10T09:16:53.390 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:53 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.0.service: Unit process 119871 (podman) remains running after unit stopped. 2026-03-10T09:16:53.390 INFO:journalctl@ceph.osd.0.vm09.stdout:Mar 10 09:16:53 vm09 systemd[1]: Stopped Ceph osd.0 for 349a7c12-1c61-11f1-8c28-6d0db3d11b76. 2026-03-10T09:16:53.785 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T09:16:53.785 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-10T09:16:53.785 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-10T09:16:53.785 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.1 2026-03-10T09:16:54.140 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:53 vm09 systemd[1]: Stopping Ceph osd.1 for 349a7c12-1c61-11f1-8c28-6d0db3d11b76... 
2026-03-10T09:16:54.140 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:53 vm09 podman[119948]: 2026-03-10 09:16:53.976412906 +0000 UTC m=+0.035883086 container died 1c99497d401812635e7191288a97cf20a4909f95b9221048309e4b47992bf94e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T09:16:54.140 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:53 vm09 podman[119948]: 2026-03-10 09:16:53.995918224 +0000 UTC m=+0.055388405 container remove 1c99497d401812635e7191288a97cf20a4909f95b9221048309e4b47992bf94e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T09:16:54.140 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:53 vm09 bash[119948]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1 2026-03-10T09:16:54.140 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:54 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.1.service: Main process exited, code=exited, status=143/n/a 2026-03-10T09:16:54.417 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.1.service' 2026-03-10T09:16:54.640 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:54 vm09 podman[120013]: 2026-03-10 09:16:54.205529972 +0000 UTC m=+0.018277153 container create ff7986e14caaa093d0fe2a7a21c8ab773bf787df8ae268c7845c4123537b07c3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-deactivate, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, 
io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) 2026-03-10T09:16:54.641 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:54 vm09 podman[120013]: 2026-03-10 09:16:54.242594718 +0000 UTC m=+0.055341920 container init ff7986e14caaa093d0fe2a7a21c8ab773bf787df8ae268c7845c4123537b07c3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-deactivate, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T09:16:54.641 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:54 vm09 podman[120013]: 2026-03-10 09:16:54.24607044 +0000 UTC m=+0.058817622 container start ff7986e14caaa093d0fe2a7a21c8ab773bf787df8ae268c7845c4123537b07c3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-deactivate, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True) 2026-03-10T09:16:54.641 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:54 vm09 podman[120013]: 2026-03-10 09:16:54.251053641 +0000 UTC m=+0.063800832 container attach ff7986e14caaa093d0fe2a7a21c8ab773bf787df8ae268c7845c4123537b07c3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-deactivate, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0) 2026-03-10T09:16:54.641 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:54 vm09 podman[120013]: 2026-03-10 09:16:54.197885681 +0000 UTC m=+0.010632883 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T09:16:54.641 
INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:54 vm09 conmon[120025]: conmon ff7986e14caaa093d0fe : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff7986e14caaa093d0fe2a7a21c8ab773bf787df8ae268c7845c4123537b07c3.scope/container/memory.events 2026-03-10T09:16:54.641 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:54 vm09 podman[120013]: 2026-03-10 09:16:54.37548289 +0000 UTC m=+0.188230081 container died ff7986e14caaa093d0fe2a7a21c8ab773bf787df8ae268c7845c4123537b07c3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-deactivate, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223) 2026-03-10T09:16:54.641 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:54 vm09 podman[120013]: 2026-03-10 09:16:54.396676126 +0000 UTC m=+0.209423317 container remove ff7986e14caaa093d0fe2a7a21c8ab773bf787df8ae268c7845c4123537b07c3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-1-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True) 2026-03-10T09:16:54.641 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:54 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.1.service: Failed with result 'exit-code'. 2026-03-10T09:16:54.641 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 10 09:16:54 vm09 systemd[1]: Stopped Ceph osd.1 for 349a7c12-1c61-11f1-8c28-6d0db3d11b76. 2026-03-10T09:16:54.873 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T09:16:54.873 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-10T09:16:54.873 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-10T09:16:54.874 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.2 2026-03-10T09:16:55.139 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:54 vm09 systemd[1]: Stopping Ceph osd.2 for 349a7c12-1c61-11f1-8c28-6d0db3d11b76... 
2026-03-10T09:16:55.139 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 podman[120112]: 2026-03-10 09:16:55.028493346 +0000 UTC m=+0.038159153 container died b89bd504ce8a2a4e313924d177147a8abeb41639e703eb9872653e84d3f79314 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T09:16:55.140 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 podman[120112]: 2026-03-10 09:16:55.047438827 +0000 UTC m=+0.057104635 container remove b89bd504ce8a2a4e313924d177147a8abeb41639e703eb9872653e84d3f79314 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T09:16:55.140 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 bash[120112]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2 2026-03-10T09:16:55.140 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.2.service: Main process exited, code=exited, status=143/n/a 2026-03-10T09:16:55.483 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.2.service' 2026-03-10T09:16:55.512 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 podman[120176]: 2026-03-10 09:16:55.263659739 +0000 UTC m=+0.021640733 container create 67392602a59854aa117550fa9c0672c96703a851761d1686f25a11aa0f3f716a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-deactivate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, 
org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default) 2026-03-10T09:16:55.513 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 podman[120176]: 2026-03-10 09:16:55.305712466 +0000 UTC m=+0.063693471 container init 67392602a59854aa117550fa9c0672c96703a851761d1686f25a11aa0f3f716a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223) 2026-03-10T09:16:55.513 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 podman[120176]: 2026-03-10 09:16:55.308548071 +0000 UTC m=+0.066529076 container start 67392602a59854aa117550fa9c0672c96703a851761d1686f25a11aa0f3f716a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) 2026-03-10T09:16:55.513 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 podman[120176]: 2026-03-10 09:16:55.309527573 +0000 UTC m=+0.067508578 container attach 67392602a59854aa117550fa9c0672c96703a851761d1686f25a11aa0f3f716a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T09:16:55.513 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 podman[120176]: 2026-03-10 09:16:55.255343081 +0000 UTC m=+0.013324086 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 
2026-03-10T09:16:55.513 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 conmon[120186]: conmon 67392602a59854aa1175 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67392602a59854aa117550fa9c0672c96703a851761d1686f25a11aa0f3f716a.scope/container/memory.events 2026-03-10T09:16:55.513 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 podman[120176]: 2026-03-10 09:16:55.45555292 +0000 UTC m=+0.213533925 container died 67392602a59854aa117550fa9c0672c96703a851761d1686f25a11aa0f3f716a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-deactivate, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T09:16:55.513 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 podman[120176]: 2026-03-10 09:16:55.471596733 +0000 UTC m=+0.229577738 container remove 67392602a59854aa117550fa9c0672c96703a851761d1686f25a11aa0f3f716a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76-osd-2-deactivate, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS) 2026-03-10T09:16:55.513 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.2.service: Failed with result 'exit-code'. 2026-03-10T09:16:55.513 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.2.service: Unit process 120186 (conmon) remains running after unit stopped. 2026-03-10T09:16:55.513 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 systemd[1]: ceph-349a7c12-1c61-11f1-8c28-6d0db3d11b76@osd.2.service: Unit process 120195 (podman) remains running after unit stopped. 2026-03-10T09:16:55.513 INFO:journalctl@ceph.osd.2.vm09.stdout:Mar 10 09:16:55 vm09 systemd[1]: Stopped Ceph osd.2 for 349a7c12-1c61-11f1-8c28-6d0db3d11b76. 
2026-03-10T09:16:55.945 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T09:16:55.945 INFO:tasks.cephadm.osd.2:Stopped osd.2
2026-03-10T09:16:55.946 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 --force --keep-logs
2026-03-10T09:16:56.127 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: 349a7c12-1c61-11f1-8c28-6d0db3d11b76
2026-03-10T09:17:09.997 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T09:17:10.024 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-10T09:17:10.024 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/977/remote/vm09/crash
2026-03-10T09:17:10.024 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/crash -- .
2026-03-10T09:17:10.091 INFO:teuthology.orchestra.run.vm09.stderr:tar: /var/lib/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/crash: Cannot open: No such file or directory
2026-03-10T09:17:10.092 INFO:teuthology.orchestra.run.vm09.stderr:tar: Error is not recoverable: exiting now
2026-03-10T09:17:10.093 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-10T09:17:10.093 DEBUG:teuthology.orchestra.run.vm09:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v CEPHADM_FAILED_DAEMON | head -n 1
2026-03-10T09:17:10.161 INFO:tasks.cephadm:Compressing logs...
2026-03-10T09:17:10.161 DEBUG:teuthology.orchestra.run.vm09:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T09:17:10.229 INFO:teuthology.orchestra.run.vm09.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T09:17:10.229 INFO:teuthology.orchestra.run.vm09.stderr:‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T09:17:10.230 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-mon.a.log
2026-03-10T09:17:10.230 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph.log
2026-03-10T09:17:10.231 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-mgr.a.log
2026-03-10T09:17:10.232 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-mon.a.log: 90.6% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T09:17:10.233 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph.log: 83.9% -- replaced with /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph.log.gz
2026-03-10T09:17:10.233 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph.audit.log
2026-03-10T09:17:10.239 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-mgr.a.log: gzip -5 --verbose -- /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph.cephadm.log
2026-03-10T09:17:10.242 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph.audit.log: 89.0% -- replaced with /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph.audit.log.gz
2026-03-10T09:17:10.243 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-volume.log
2026-03-10T09:17:10.245 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph.cephadm.log: 76.0% -- replaced with /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph.cephadm.log.gz
2026-03-10T09:17:10.248 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-osd.0.log
2026-03-10T09:17:10.257 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-osd.1.log
2026-03-10T09:17:10.264 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-osd.0.log: gzip -5 --verbose -- /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-osd.2.log
2026-03-10T09:17:10.273 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-osd.1.log: 95.6% -- replaced with /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-volume.log.gz
2026-03-10T09:17:10.275 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/tcmu-runner.log
2026-03-10T09:17:10.289 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-osd.2.log: /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/tcmu-runner.log: 89.8% -- replaced with /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-mgr.a.log.gz 62.9% -- replaced with /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/tcmu-runner.log.gz
2026-03-10T09:17:10.289 INFO:teuthology.orchestra.run.vm09.stderr:
2026-03-10T09:17:10.306 INFO:teuthology.orchestra.run.vm09.stderr: 91.3% -- replaced with /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-mon.a.log.gz
2026-03-10T09:17:10.350 INFO:teuthology.orchestra.run.vm09.stderr: 94.7% -- replaced with /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-osd.2.log.gz
2026-03-10T09:17:10.367 INFO:teuthology.orchestra.run.vm09.stderr: 95.0% -- replaced with /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-osd.0.log.gz
2026-03-10T09:17:10.372 INFO:teuthology.orchestra.run.vm09.stderr: 95.0% -- replaced with /var/log/ceph/349a7c12-1c61-11f1-8c28-6d0db3d11b76/ceph-osd.1.log.gz
2026-03-10T09:17:10.373 INFO:teuthology.orchestra.run.vm09.stderr:
2026-03-10T09:17:10.373 INFO:teuthology.orchestra.run.vm09.stderr:real 0m0.156s
2026-03-10T09:17:10.373 INFO:teuthology.orchestra.run.vm09.stderr:user 0m0.259s
2026-03-10T09:17:10.373 INFO:teuthology.orchestra.run.vm09.stderr:sys 0m0.035s
2026-03-10T09:17:10.374 INFO:tasks.cephadm:Archiving logs...
2026-03-10T09:17:10.374 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/977/remote/vm09/log
2026-03-10T09:17:10.374 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T09:17:10.455 INFO:tasks.cephadm:Removing cluster...
2026-03-10T09:17:10.455 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 349a7c12-1c61-11f1-8c28-6d0db3d11b76 --force
2026-03-10T09:17:10.638 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: 349a7c12-1c61-11f1-8c28-6d0db3d11b76
2026-03-10T09:17:10.898 INFO:tasks.cephadm:Removing cephadm ...
2026-03-10T09:17:10.898 DEBUG:teuthology.orchestra.run.vm09:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T09:17:10.915 INFO:tasks.cephadm:Teardown complete
2026-03-10T09:17:10.915 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-10T09:17:10.919 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-03-10T09:17:10.919 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T09:17:10.997 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-10T09:17:10.997 DEBUG:teuthology.orchestra.run.vm09:>
2026-03-10T09:17:10.998 DEBUG:teuthology.orchestra.run.vm09:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-10T09:17:10.998 DEBUG:teuthology.orchestra.run.vm09:> sudo yum -y remove $d || true
2026-03-10T09:17:10.998 DEBUG:teuthology.orchestra.run.vm09:> done
2026-03-10T09:17:11.369 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:11.370 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:11.370 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T09:17:11.370 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:11.370 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T09:17:11.370 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M
2026-03-10T09:17:11.370 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-10T09:17:11.370 INFO:teuthology.orchestra.run.vm09.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k
2026-03-10T09:17:11.370 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:11.370 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T09:17:11.370 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:11.370 INFO:teuthology.orchestra.run.vm09.stdout:Remove 2 Packages
2026-03-10T09:17:11.370 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:11.370 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 39 M
2026-03-10T09:17:11.370 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T09:17:11.374 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T09:17:11.374 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T09:17:11.399 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T09:17:11.400 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T09:17:11.433 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T09:17:11.456 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:17:11.456 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:17:11.456 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T09:17:11.456 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-10T09:17:11.456 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-10T09:17:11.456 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:11.458 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:17:11.473 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:17:11.494 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T09:17:11.567 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T09:17:11.567 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:17:11.634 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T09:17:11.634 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:11.634 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T09:17:11.634 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-10T09:17:11.634 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:11.634 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout:Remove 4 Packages
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 212 M
2026-03-10T09:17:11.852 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T09:17:11.855 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T09:17:11.855 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T09:17:11.886 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T09:17:11.886 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T09:17:11.942 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T09:17:11.949 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T09:17:11.952 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-10T09:17:11.955 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-10T09:17:11.971 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T09:17:12.037 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T09:17:12.037 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T09:17:12.037 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T09:17:12.037 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T09:17:12.097 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T09:17:12.097 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:12.097 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T09:17:12.097 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T09:17:12.097 INFO:teuthology.orchestra.run.vm09.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T09:17:12.097 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:12.097 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:12.329 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:12.330 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:12.330 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T09:17:12.330 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:12.330 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T09:17:12.330 INFO:teuthology.orchestra.run.vm09.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-10T09:17:12.330 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-10T09:17:12.330 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-10T09:17:12.330 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-10T09:17:12.330 INFO:teuthology.orchestra.run.vm09.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-10T09:17:12.330 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-10T09:17:12.330 INFO:teuthology.orchestra.run.vm09.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-10T09:17:12.330 INFO:teuthology.orchestra.run.vm09.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-10T09:17:12.330 INFO:teuthology.orchestra.run.vm09.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-10T09:17:12.330 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:12.330 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T09:17:12.331 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:12.331 INFO:teuthology.orchestra.run.vm09.stdout:Remove 8 Packages
2026-03-10T09:17:12.331 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:12.331 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 28 M
2026-03-10T09:17:12.331 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T09:17:12.333 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T09:17:12.334 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T09:17:12.366 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T09:17:12.366 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T09:17:12.430 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T09:17:12.436 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T09:17:12.440 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-10T09:17:12.442 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-10T09:17:12.445 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-10T09:17:12.449 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-10T09:17:12.451 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-10T09:17:12.472 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T09:17:12.472 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:17:12.472 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T09:17:12.472 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-10T09:17:12.472 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-10T09:17:12.472 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:12.472 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T09:17:12.481 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T09:17:12.503 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T09:17:12.503 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:17:12.503 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T09:17:12.503 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-10T09:17:12.503 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-10T09:17:12.503 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:12.504 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T09:17:12.612 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T09:17:12.612 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T09:17:12.612 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-10T09:17:12.612 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-10T09:17:12.612 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-10T09:17:12.613 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-10T09:17:12.613 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-10T09:17:12.613 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-10T09:17:12.760 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-10T09:17:12.760 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:12.760 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T09:17:12.760 INFO:teuthology.orchestra.run.vm09.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:12.760 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:12.760 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:12.760 INFO:teuthology.orchestra.run.vm09.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T09:17:12.760 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T09:17:12.760 INFO:teuthology.orchestra.run.vm09.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T09:17:12.760 INFO:teuthology.orchestra.run.vm09.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T09:17:12.760 INFO:teuthology.orchestra.run.vm09.stdout: zip-3.0-35.el9.x86_64
2026-03-10T09:17:12.760 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:12.760 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:13.010 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout:===========================================================================================
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout:===========================================================================================
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages:
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M
2026-03-10T09:17:13.016 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M
2026-03-10T09:17:13.017 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing noarch 2.4.7-9.el9 @baseos 635 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout:===========================================================================================
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout:Remove 103 Packages
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 613 M
2026-03-10T09:17:13.018 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T09:17:13.047 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T09:17:13.047 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T09:17:13.196 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T09:17:13.197 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T09:17:13.373 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T09:17:13.373 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/103
2026-03-10T09:17:13.383 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/103
2026-03-10T09:17:13.402 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/103
2026-03-10T09:17:13.402 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:17:13.402 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T09:17:13.402 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-10T09:17:13.402 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-10T09:17:13.402 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:13.403 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/103
2026-03-10T09:17:13.416 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/103
2026-03-10T09:17:13.436 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/103
2026-03-10T09:17:13.436 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/103
2026-03-10T09:17:13.503 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/103
2026-03-10T09:17:13.513 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/103
2026-03-10T09:17:13.519 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/103
2026-03-10T09:17:13.520 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/103
2026-03-10T09:17:13.536 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/103
2026-03-10T09:17:13.545 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/103
2026-03-10T09:17:13.553 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/103
2026-03-10T09:17:13.566 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/103
2026-03-10T09:17:13.572 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/103
2026-03-10T09:17:13.601 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/103
2026-03-10T09:17:13.601 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:17:13.601 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T09:17:13.601 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-10T09:17:13.601 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-10T09:17:13.602 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:13.602 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/103
2026-03-10T09:17:13.618 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/103
2026-03-10T09:17:13.643 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/103
2026-03-10T09:17:13.643 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:17:13.643 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T09:17:13.643 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:13.655 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/103
2026-03-10T09:17:13.669 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/103
2026-03-10T09:17:13.672 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/103
2026-03-10T09:17:13.677 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/103
2026-03-10T09:17:13.682 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/103
2026-03-10T09:17:13.693 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/103
2026-03-10T09:17:13.707 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/103
2026-03-10T09:17:13.715 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/103
2026-03-10T09:17:13.728 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/103
2026-03-10T09:17:13.736 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/103
2026-03-10T09:17:13.772 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/103
2026-03-10T09:17:13.781 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/103
2026-03-10T09:17:13.784 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/103
2026-03-10T09:17:13.794 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/103
2026-03-10T09:17:13.801 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/103
2026-03-10T09:17:13.801 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/103
2026-03-10T09:17:13.811 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/103
2026-03-10T09:17:13.929 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/103
2026-03-10T09:17:13.946 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/103
2026-03-10T09:17:13.963 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/103
2026-03-10T09:17:13.964 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-10T09:17:13.964 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:13.966 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/103
2026-03-10T09:17:14.004 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/103
2026-03-10T09:17:14.023 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/103
2026-03-10T09:17:14.029 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/103
2026-03-10T09:17:14.032 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/103
2026-03-10T09:17:14.035 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/103
2026-03-10T09:17:14.063 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/103
2026-03-10T09:17:14.063 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:17:14.063 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T09:17:14.063 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-10T09:17:14.063 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-10T09:17:14.063 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:14.064 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/103
2026-03-10T09:17:14.081 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/103
2026-03-10T09:17:14.087 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/103
2026-03-10T09:17:14.092 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/103
2026-03-10T09:17:14.095 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 38/103
2026-03-10T09:17:14.098 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 39/103
2026-03-10T09:17:14.101 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 40/103
2026-03-10T09:17:14.105 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 41/103
2026-03-10T09:17:14.110 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 42/103
2026-03-10T09:17:14.115 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 43/103
2026-03-10T09:17:14.171 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 44/103
2026-03-10T09:17:14.183 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 45/103
2026-03-10T09:17:14.186 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 46/103
2026-03-10T09:17:14.188 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 47/103
2026-03-10T09:17:14.190 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 48/103
2026-03-10T09:17:14.194 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 49/103
2026-03-10T09:17:14.197 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 50/103
2026-03-10T09:17:14.222 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 51/103
2026-03-10T09:17:14.222 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T09:17:14.222 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T09:17:14.223 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:14.223 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 51/103
2026-03-10T09:17:14.233 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 51/103
2026-03-10T09:17:14.236 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 52/103
2026-03-10T09:17:14.239 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 53/103
2026-03-10T09:17:14.242 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ply-3.11-14.el9.noarch 54/103
2026-03-10T09:17:14.244 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 55/103
2026-03-10T09:17:14.247 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 56/103
2026-03-10T09:17:14.250 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 57/103
2026-03-10T09:17:14.253 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 58/103
2026-03-10T09:17:14.257 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 59/103
2026-03-10T09:17:14.260 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyparsing-2.4.7-9.el9.noarch 60/103
2026-03-10T09:17:14.270 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 61/103
2026-03-10T09:17:14.276 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 62/103
2026-03-10T09:17:14.278 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 63/103
2026-03-10T09:17:14.282 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 64/103
2026-03-10T09:17:14.286 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 65/103
2026-03-10T09:17:14.293 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 66/103
2026-03-10T09:17:14.300 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 67/103
2026-03-10T09:17:14.306 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 68/103
2026-03-10T09:17:14.312 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 69/103
2026-03-10T09:17:14.320 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 70/103
2026-03-10T09:17:14.324 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 71/103
2026-03-10T09:17:14.327 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 72/103
2026-03-10T09:17:14.334 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 73/103
2026-03-10T09:17:14.338 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 74/103
2026-03-10T09:17:14.342 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 75/103
2026-03-10T09:17:14.352 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 76/103
2026-03-10T09:17:14.359 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 77/103
2026-03-10T09:17:14.364 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 78/103
2026-03-10T09:17:14.366 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 79/103
2026-03-10T09:17:14.368 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 80/103
2026-03-10T09:17:14.375 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 81/103
2026-03-10T09:17:14.381 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 82/103
2026-03-10T09:17:14.406 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 83/103
2026-03-10T09:17:14.406 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-10T09:17:14.406 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:14.415 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 83/103
2026-03-10T09:17:14.440 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 83/103
2026-03-10T09:17:14.440 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 84/103
2026-03-10T09:17:14.454 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 84/103
2026-03-10T09:17:14.460 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 85/103
2026-03-10T09:17:14.463 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 86/103
2026-03-10T09:17:14.465 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 87/103
2026-03-10T09:17:14.465 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 88/103
2026-03-10T09:17:20.447 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 88/103
2026-03-10T09:17:20.447 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /sys
2026-03-10T09:17:20.447 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /proc
2026-03-10T09:17:20.447 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /mnt
2026-03-10T09:17:20.447 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /var/tmp
2026-03-10T09:17:20.447 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /home
2026-03-10T09:17:20.447 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /root
2026-03-10T09:17:20.447 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /tmp
2026-03-10T09:17:20.447 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:20.457 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 89/103
2026-03-10T09:17:20.477 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 90/103
2026-03-10T09:17:20.477 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 90/103
2026-03-10T09:17:20.485 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 90/103
2026-03-10T09:17:20.489 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 91/103
2026-03-10T09:17:20.492 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 92/103
2026-03-10T09:17:20.495 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 93/103
2026-03-10T09:17:20.498 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 94/103
2026-03-10T09:17:20.498 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 95/103
2026-03-10T09:17:20.516 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 95/103
2026-03-10T09:17:20.518 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 96/103
2026-03-10T09:17:20.521 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 97/103
2026-03-10T09:17:20.524 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 98/103
2026-03-10T09:17:20.527 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 99/103
2026-03-10T09:17:20.532 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 100/103
2026-03-10T09:17:20.541 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 101/103
2026-03-10T09:17:20.546 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 102/103
2026-03-10T09:17:20.546 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 103/103
2026-03-10T09:17:20.650 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 103/103
2026-03-10T09:17:20.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/103
2026-03-10T09:17:20.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/103
2026-03-10T09:17:20.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/103
2026-03-10T09:17:20.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/103
2026-03-10T09:17:20.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/103
2026-03-10T09:17:20.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/103
2026-03-10T09:17:20.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/103
2026-03-10T09:17:20.651 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/103
2026-03-10T09:17:20.652 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 83/103
2026-03-10T09:17:20.653 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 84/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 85/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 86/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 87/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 88/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 89/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 90/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 91/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 92/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 93/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 94/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 95/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 96/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 97/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 98/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 99/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 100/103
2026-03-10T09:17:20.654 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 101/103
2026-03-10T09:17:20.655 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 102/103
2026-03-10T09:17:20.738 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 103/103
2026-03-10T09:17:20.738 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:20.738 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-chardet-4.0.0-5.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-idna-2.10-7.el9.1.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-10T09:17:20.739 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpatch-1.21-16.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpointer-2.0-4.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-oauthlib-3.1.1-5.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply-3.11-14.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable-0.7.2-27.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing-2.4.7-9.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-pysocks-1.7.1-12.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-pytz-2021.1-5.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:20.740 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:20.958 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:20.959 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:20.959 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T09:17:20.959 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:20.959 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T09:17:20.959 INFO:teuthology.orchestra.run.vm09.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k
2026-03-10T09:17:20.959 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:20.959 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T09:17:20.959 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:20.959 INFO:teuthology.orchestra.run.vm09.stdout:Remove 1 Package
2026-03-10T09:17:20.959 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:20.959 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 775 k
2026-03-10T09:17:20.959 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T09:17:20.961 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T09:17:20.961 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T09:17:20.962 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T09:17:20.962 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T09:17:20.979 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T09:17:20.979 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T09:17:21.100 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T09:17:21.148 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T09:17:21.149 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:21.149 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T09:17:21.149 INFO:teuthology.orchestra.run.vm09.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T09:17:21.149 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:21.149 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:21.336 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-immutable-object-cache
2026-03-10T09:17:21.336 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:21.340 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:21.340 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:21.341 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:21.521 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr
2026-03-10T09:17:21.521 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:21.525 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:21.526 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:21.526 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:21.697 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-dashboard
2026-03-10T09:17:21.697 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:21.701 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:21.701 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:21.702 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:21.890 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-10T09:17:21.890 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:21.894 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:21.894 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:21.894 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:22.076 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-rook
2026-03-10T09:17:22.076 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:22.080 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:22.080 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:22.080 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:22.271 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-cephadm
2026-03-10T09:17:22.271 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:22.274 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:22.275 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:22.275 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:22.471 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:22.471 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:22.471 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T09:17:22.472 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:22.472 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T09:17:22.472 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M
2026-03-10T09:17:22.472 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:22.472 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T09:17:22.472 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:22.472 INFO:teuthology.orchestra.run.vm09.stdout:Remove 1 Package
2026-03-10T09:17:22.472 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:22.472 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 3.6 M
2026-03-10T09:17:22.472 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T09:17:22.473 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T09:17:22.473 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T09:17:22.483 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T09:17:22.483 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T09:17:22.508 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T09:17:22.526 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T09:17:22.618 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T09:17:22.696 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T09:17:22.697 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:22.697 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T09:17:22.697 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:22.697 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:22.697 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:22.897 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-volume
2026-03-10T09:17:22.897 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:22.901 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:22.901 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:22.902 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:23.106 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:23.106 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:23.106 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repo Size
2026-03-10T09:17:23.106 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:23.106 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T09:17:23.106 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k
2026-03-10T09:17:23.106 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages:
2026-03-10T09:17:23.106 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k
2026-03-10T09:17:23.106 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:23.106 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T09:17:23.106 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:23.106 INFO:teuthology.orchestra.run.vm09.stdout:Remove 2 Packages
2026-03-10T09:17:23.106 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:23.106 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 610 k
2026-03-10T09:17:23.106 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T09:17:23.108 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T09:17:23.108 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T09:17:23.119 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T09:17:23.119 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T09:17:23.144 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T09:17:23.146 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:17:23.160 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T09:17:23.231 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T09:17:23.232 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T09:17:23.301 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T09:17:23.301 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:23.301 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T09:17:23.301 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:23.301 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:23.301 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:23.301 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:23.507 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repo Size
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages:
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout:Remove 3 Packages
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 3.7 M
2026-03-10T09:17:23.508 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T09:17:23.510 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T09:17:23.510 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T09:17:23.528 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T09:17:23.528 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T09:17:23.561 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T09:17:23.564 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T09:17:23.566 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T09:17:23.566 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T09:17:23.641 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T09:17:23.642 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T09:17:23.642 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T09:17:23.684 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T09:17:23.684 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:23.684 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T09:17:23.684 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:23.684 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:23.684 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:23.684 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:23.684 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:23.864 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: libcephfs-devel
2026-03-10T09:17:23.864 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:23.867 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:23.868 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:23.868 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:24.065 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages:
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T09:17:24.067 INFO:teuthology.orchestra.run.vm09.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T09:17:24.068 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T09:17:24.068 INFO:teuthology.orchestra.run.vm09.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-10T09:17:24.068 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T09:17:24.068 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T09:17:24.068 INFO:teuthology.orchestra.run.vm09.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T09:17:24.068 INFO:teuthology.orchestra.run.vm09.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T09:17:24.068 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:24.068 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T09:17:24.068 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T09:17:24.068 INFO:teuthology.orchestra.run.vm09.stdout:Remove 20 Packages
2026-03-10T09:17:24.068 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:24.068 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 79 M
2026-03-10T09:17:24.068 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T09:17:24.072 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T09:17:24.072 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T09:17:24.094 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T09:17:24.095 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T09:17:24.143 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T09:17:24.146 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-10T09:17:24.148 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-10T09:17:24.150 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-10T09:17:24.150 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T09:17:24.165 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T09:17:24.167 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-10T09:17:24.169 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-10T09:17:24.171 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T09:17:24.172 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-10T09:17:24.174 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-10T09:17:24.174 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T09:17:24.189 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T09:17:24.190 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T09:17:24.190 INFO:teuthology.orchestra.run.vm09.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T09:17:24.190 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:24.205 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T09:17:24.207 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-10T09:17:24.211 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-10T09:17:24.215 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-10T09:17:24.219 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-10T09:17:24.222 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-10T09:17:24.224 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-10T09:17:24.226 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-10T09:17:24.228 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-10T09:17:24.243 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T09:17:24.309 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T09:17:24.310 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-10T09:17:24.310 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-10T09:17:24.310 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-10T09:17:24.310 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-10T09:17:24.310 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-10T09:17:24.310 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-10T09:17:24.310 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T09:17:24.310 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-10T09:17:24.310 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-10T09:17:24.310 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T09:17:24.311 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-10T09:17:24.311 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-10T09:17:24.311 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-10T09:17:24.311 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-10T09:17:24.311 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-10T09:17:24.311 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-10T09:17:24.311 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-10T09:17:24.311 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-10T09:17:24.311 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-10T09:17:24.366 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-10T09:17:24.366 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:24.366 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T09:17:24.366 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T09:17:24.366 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T09:17:24.366 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T09:17:24.366 INFO:teuthology.orchestra.run.vm09.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T09:17:24.366 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T09:17:24.366 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T09:17:24.366 INFO:teuthology.orchestra.run.vm09.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:24.366 INFO:teuthology.orchestra.run.vm09.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:24.366 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T09:17:24.367 INFO:teuthology.orchestra.run.vm09.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:24.367 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T09:17:24.367 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T09:17:24.367 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:24.367 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:24.367 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:24.367 INFO:teuthology.orchestra.run.vm09.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T09:17:24.367 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:24.367 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T09:17:24.367 INFO:teuthology.orchestra.run.vm09.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T09:17:24.367 INFO:teuthology.orchestra.run.vm09.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T09:17:24.367 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T09:17:24.367 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:24.606 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: librbd1
2026-03-10T09:17:24.606 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:24.609 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:24.609 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:24.609 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:24.800 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rados
2026-03-10T09:17:24.800 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:24.802 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:24.803 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:24.803 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:24.988 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rgw
2026-03-10T09:17:24.988 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:24.990 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:24.991 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:24.991 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:25.167 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-cephfs
2026-03-10T09:17:25.168 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:25.169 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:25.170 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:25.170 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:25.344 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rbd
2026-03-10T09:17:25.344 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:25.346 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:25.347 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:25.347 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:25.524 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-fuse
2026-03-10T09:17:25.524 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:25.526 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:25.527 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:25.527 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:25.710 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-mirror
2026-03-10T09:17:25.711 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:25.713 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:25.714 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:25.714 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:25.910 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-nbd
2026-03-10T09:17:25.910 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T09:17:25.912 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T09:17:25.913 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T09:17:25.913 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T09:17:25.941 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean all
2026-03-10T09:17:26.083 INFO:teuthology.orchestra.run.vm09.stdout:56 files removed
2026-03-10T09:17:26.110 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T09:17:26.140 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean expire-cache
2026-03-10T09:17:26.315 INFO:teuthology.orchestra.run.vm09.stdout:Cache was expired
2026-03-10T09:17:26.315 INFO:teuthology.orchestra.run.vm09.stdout:0 files removed
2026-03-10T09:17:26.335 DEBUG:teuthology.parallel:result is None
2026-03-10T09:17:26.336 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm09.local
2026-03-10T09:17:26.336 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T09:17:26.359 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-10T09:17:26.428 DEBUG:teuthology.parallel:result is None
2026-03-10T09:17:26.428 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-10T09:17:26.431 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-10T09:17:26.431 DEBUG:teuthology.orchestra.run.vm09:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T09:17:26.488 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found
2026-03-10T09:17:26.500 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T09:17:26.500 INFO:teuthology.orchestra.run.vm09.stdout:===============================================================================
2026-03-10T09:17:26.500 INFO:teuthology.orchestra.run.vm09.stdout:^+ mailout04.fischl.online 2 6 377 58 +2584us[+2667us] +/- 41ms
2026-03-10T09:17:26.500 INFO:teuthology.orchestra.run.vm09.stdout:^* ntp4.lwlcom.net 1 6 377 57 -820us[ -737us] +/- 15ms
2026-03-10T09:17:26.500 INFO:teuthology.orchestra.run.vm09.stdout:^+ 141.84.43.73 2 6 377 56 +1561us[+1561us] +/- 21ms
2026-03-10T09:17:26.500 INFO:teuthology.orchestra.run.vm09.stdout:^+ ntp2.lwlcom.net 1 6 377 57 -808us[ -726us] +/- 15ms
2026-03-10T09:17:26.501 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-10T09:17:26.503 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-10T09:17:26.504 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-10T09:17:26.505 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-10T09:17:26.507 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-10T09:17:26.509 INFO:teuthology.task.internal:Duration was 484.835836 seconds
2026-03-10T09:17:26.509 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-10T09:17:26.511 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-10T09:17:26.512 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T09:17:26.599 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T09:17:26.924 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-10T09:17:26.924 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm09.local
2026-03-10T09:17:26.924 DEBUG:teuthology.orchestra.run.vm09:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T09:17:26.949 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-10T09:17:26.949 DEBUG:teuthology.orchestra.run.vm09:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T09:17:27.633 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T09:17:27.633 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T09:17:27.661 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T09:17:27.661 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T09:17:27.661 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T09:17:27.661 INFO:teuthology.orchestra.run.vm09.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T09:17:27.661 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T09:17:27.833 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 97.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T09:17:27.835 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T09:17:27.838 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T09:17:27.838 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T09:17:27.902 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T09:17:27.905 DEBUG:teuthology.orchestra.run.vm09:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T09:17:27.968 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = core
2026-03-10T09:17:27.983 DEBUG:teuthology.orchestra.run.vm09:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T09:17:28.040 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:17:28.041 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T09:17:28.044 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T09:17:28.045 DEBUG:teuthology.misc:Transferring archived files from vm09:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/977/remote/vm09
2026-03-10T09:17:28.045 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T09:17:28.114 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T09:17:28.114 DEBUG:teuthology.orchestra.run.vm09:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T09:17:28.168 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T09:17:28.170 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T09:17:28.170 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T09:17:28.173 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-10T09:17:28.173 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T09:17:28.228 INFO:teuthology.orchestra.run.vm09.stdout: 8532145 0 drwxr-xr-x 3 ubuntu ubuntu 19 Mar 10 09:17 /home/ubuntu/cephtest
2026-03-10T09:17:28.228 INFO:teuthology.orchestra.run.vm09.stdout: 46255746 0 drwxr-xr-x 3 ubuntu ubuntu 22 Mar 10 09:14 /home/ubuntu/cephtest/mnt.0
2026-03-10T09:17:28.228 INFO:teuthology.orchestra.run.vm09.stdout: 51007237 0 drwxr-xr-x 3 ubuntu ubuntu 17 Mar 10 09:14 /home/ubuntu/cephtest/mnt.0/client.0
2026-03-10T09:17:28.228 INFO:teuthology.orchestra.run.vm09.stdout: 37845972 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 09:14 /home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-10T09:17:28.229 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:17:28.229 INFO:teuthology.orchestra.run.vm09.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-10T09:17:28.229 ERROR:teuthology.run_tasks:Manager failed: internal.base
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 53, in base
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm09 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-10T09:17:28.229 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T09:17:28.232 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: CommandFailedError: Command failed on vm09 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-10T09:17:28.233 INFO:teuthology.run:Summary data:
description: orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}}
duration: 484.83583641052246
failure_reason: 'Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on vm09 with status 125: ''mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh'''
flavor: default
owner: kyr
sentry_event: null
status: fail
success: false
2026-03-10T09:17:28.233 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T09:17:28.255 INFO:teuthology.run:FAIL